Hi all.
I'd like somebody to share their experience in organizing library
development, including:
1. test-driven development
2. code coverage analysis (through rcov?), which would be automatically
performed after each test
3. version control (through SVN?)
4. optional code speed analysis (like benchmarking "how long it runs",
profiling "what runs so long") after each test
5. optional packaging (through rake? rant?) and uploading to (rubyforge?
sourceforge?)
All experiences are welcome.
Big thanks!
Victor.
Victor Shepelev wrote:
4. optional code speed analysis (like benchmarking "how long it runs",
profiling "what runs so long") after each test
See the 'profile' library provided in Ruby core.
ruby -r profile
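For example, given a small script (a made-up `fib.rb`), loading the profiler this way prints a per-method timing table when the script exits:

```ruby
# fib.rb -- a toy script worth profiling (hypothetical example)
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

puts fib(20)
```

Running `ruby -r profile fib.rb` executes the script normally and then dumps a table showing how many times each method was called and how much time was spent in it.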
Hi all.
I'd like somebody to share their experience in organizing library
development, including:
1. test-driven development
Yes.
Test::Unit I believe. "require 'test/unit'".
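A minimal Test::Unit case looks like this (the class and method names here are made up for illustration); every `test_*` method is collected and run automatically when the file is executed:

```ruby
require 'test/unit'

# A made-up example: exercising Array used as a stack.
class TestStack < Test::Unit::TestCase
  def test_push_and_pop
    stack = []
    stack.push(42)
    assert_equal 42, stack.pop
    assert stack.empty?
  end
end
```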
2. code coverage analysis (through rcov?), which would be automatically
performed after each test
Never used rcov, but it sounds like something you'd run via a rakefile.
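As a sketch, a plain test task in a Rakefile would look like this (rcov has its own task class, which comes up later in the thread):

```ruby
# Rakefile (sketch)
require 'rake/testtask'

# Defines a `test` task that runs every test/test*.rb file.
Rake::TestTask.new do |t|
  t.libs << "test"
  t.test_files = FileList['test/test*.rb']
  t.verbose = true
end
```

With this in place, `rake test` runs the whole suite from the project root.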
3. version control (through SVN?)
Besides CVS, that's probably the most common these days. Working
without version control is like swinging on the trapeze without a net.
4. optional code speed analysis (like benchmarking "how long it runs",
profiling "what runs so long") after each test
5. optional packaging (through rake? rant?) and uploading to (rubyforge?
sourceforge?)
Dunno about the benchmarking, but as far as optional packaging goes, my
guess is that you should indeed be making a gem. FWIU, when you create
a RubyForge project and upload a gem, it automagically becomes
available to the world via "gem install --remote".
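A minimal gemspec sketch, where every name and value is a placeholder rather than a real project:

```ruby
# mylib.gemspec (sketch; all fields are placeholders)
spec = Gem::Specification.new do |s|
  s.name    = "mylib"
  s.version = "0.1.0"
  s.summary = "An example library"
  s.authors = ["Your Name"]
  s.files   = Dir["lib/**/*.rb"]
end
```

`gem build mylib.gemspec` then produces `mylib-0.1.0.gem`, which is what you'd upload to the project's file release area.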
···
On 5/30/06, Victor Shepelev <vshepelev@imho.com.ua> wrote:
All experiences are welcome.
Big thanks!
Victor.
Victor Shepelev wrote:
> 4. optional code speed analysis (like benchmarking "how long it runs",
> profiling "what runs so long") after each test
See the 'profile' library provided in Ruby core.
ruby -r profile
No-no 
I know how to do it _technically_.
What I want to know is how to _organize_ all the tasks.
Just for now I do only unit-tests, but _when_ to run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is a question.
V.
···
From: Suraj N. Kurapati [mailto:skurapat@ucsc.edu]
Sent: Tuesday, May 30, 2006 5:22 PM
Personally, I don't think I'd want my tests run after *every* change. But you might want to look into subversion's hook scripts (post-commit, pre-commit, etc.). Those would allow the repository to run all the tests/profiling/coverage via rake and email the results to interested parties upon every commit.
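A post-commit hook along those lines could be sketched like this; the repository layout, rake targets, and mail address are all placeholders, not a tested setup:

```shell
#!/bin/sh
# Subversion invokes the hook as: post-commit <repos-path> <revision>
REPOS="$1"
REV="$2"

# Export the committed revision into a scratch directory.
WORKDIR=$(mktemp -d)
svn export -q -r "$REV" "file://$REPOS/trunk" "$WORKDIR/src"

# Run the test and coverage tasks, capturing all output.
(cd "$WORKDIR/src" && rake test rcov) > "$WORKDIR/report.txt" 2>&1

# Mail the results to interested parties, then clean up.
mail -s "r$REV test/coverage report" dev@example.com < "$WORKDIR/report.txt"
rm -rf "$WORKDIR"
```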
BUT, I wouldn't focus too much on getting a lot of testing and profiling set up if you're just getting started. Although ruby makes most of this stuff really easy, you'll still burn time and energy that might be better focused on the real task. Don't optimize too early.
-Mat
···
On May 30, 2006, at 10:27 AM, Victor Shepelev wrote:
From: Suraj N. Kurapati [mailto:skurapat@ucsc.edu]
Sent: Tuesday, May 30, 2006 5:22 PM
Victor Shepelev wrote:
4. optional code speed analysis (like benchmarking "how long it runs",
profiling "what runs so long") after each test
See the 'profile' library provided in Ruby core.
ruby -r profile
No-no 
I know how to do it _technically_.
What I want to know is how to _organize_ all the tasks.
Just for now I do only unit-tests, but _when_ to run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is a question.
As for coverage analysis: I run rcov before committing to make sure I'm not
checking in (lots of) untested code. This is how the task can be defined in
Rake:
require 'rcov/rcovtask'

desc "Create a cross-referenced code coverage report."
Rcov::RcovTask.new do |t|
  t.libs << "ext/rcovrt"
  t.test_files = FileList['test/test*.rb']
  t.rcov_opts << "--callsites" # comment to disable cross-references
  t.verbose = true
end
and in Rant it'd be:
require 'rcov/rant'

desc "Create a cross-referenced code coverage report."
gen Rcov do |g|
  g.libs << "ext/rcovrt"
  g.test_files = sys['test/test*.rb']
  g.rcov_opts << "--callsites" # comment to disable cross-references
end
This way {rake,rant} rcov will generate an XHTML report and show another
on stdout.
If your commits are small enough (or should I say "atomic"?), running the
tests just before committing (say with the pre-commit hook of your VCS) might
suffice. Otherwise (larger commits, or tests that don't want to pass) autotest
would be the way to go, I guess.
Regarding profiling, I don't think it makes any sense to run that
automatically in general.
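For the benchmarking side, the bundled Benchmark library is easy enough to run by hand when you actually suspect a slowdown; a small sketch:

```ruby
require 'benchmark'

# Compare two ways of building a 10_000-character string.
n = 10_000
Benchmark.bm(8) do |bm|
  bm.report("concat:") { s = ""; n.times { s << "x" } }
  bm.report("join:")   { Array.new(n, "x").join }
end
```

Each `report` line prints user, system, and wall-clock time for its block.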
···
On Tue, May 30, 2006 at 11:27:51PM +0900, Victor Shepelev wrote:
From: Suraj N. Kurapati [mailto:skurapat@ucsc.edu]
Sent: Tuesday, May 30, 2006 5:22 PM
> Victor Shepelev wrote:
> > 4. optional code speed analysis (like benchmarking "how long it runs",
> > profiling "what runs so long") after each test
>
> See the 'profile' library provided in Ruby core.
>
> ruby -r profile
No-no 
I know how to do it _technically_.
What I want to know is how to _organize_ all the tasks.
Just for now I do only unit-tests, but _when_ to run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is a question.
--
Mauricio Fernandez - http://eigenclass.org - singular Ruby
> From: Suraj N. Kurapati [mailto:skurapat@ucsc.edu]
> Sent: Tuesday, May 30, 2006 5:22 PM
> > Victor Shepelev wrote:
> > > 4. optional code speed analysis (like benchmarking "how long it runs",
> > > profiling "what runs so long") after each test
> >
> > See the 'profile' library provided in Ruby core.
> >
> > ruby -r profile
>
> No-no 
> I know how to do it _technically_.
> What I want to know is how to _organize_ all the tasks.
> Just for now I do only unit-tests, but _when_ to run benchmarking /
> profiling / coverage analysis / dependency analysis and so on? Must those
> tasks be run automatically? When (after any code change, like unit tests)?
> This is a question.
As for coverage analysis: I run rcov before committing to make sure I'm not
checking in (lots of) untested code. This is how the task can be defined in
Rake:
require 'rcov/rcovtask'

desc "Create a cross-referenced code coverage report."
Rcov::RcovTask.new do |t|
  t.libs << "ext/rcovrt"
  t.test_files = FileList['test/test*.rb']
  t.rcov_opts << "--callsites" # comment to disable cross-references
  t.verbose = true
end
and in Rant it'd be:
require 'rcov/rant'

desc "Create a cross-referenced code coverage report."
gen Rcov do |g|
  g.libs << "ext/rcovrt"
  g.test_files = sys['test/test*.rb']
  g.rcov_opts << "--callsites" # comment to disable cross-references
end
This way {rake,rant} rcov will generate an XHTML report and show another
on stdout.
If your commits are small enough (or should I say "atomic"?), running the
tests just before committing (say with the pre-commit hook of your VCS) might
suffice. Otherwise (larger commits, or tests that don't want to pass) autotest
would be the way to go, I guess.
Regarding profiling, I don't think it makes any sense to run that
automatically in general.
Thanks Mauricio. It's just what I wanted to hear.
Mauricio Fernandez - http://eigenclass.org - singular Ruby
V.
···
From: Mauricio Julio Fernandez Pradier [mailto:ferferse@telefonica.net] On Behalf Of Mauricio Fernandez
Sent: Tuesday, May 30, 2006 7:53 PM
On Tue, May 30, 2006 at 11:27:51PM +0900, Victor Shepelev wrote: