Personally, I think languages are about the right tool for the job, and Perl, Python and Ruby have nicely proven that dynamic file-based scripting languages are more than sufficient for a huge variety of programming tasks. If you are looking for a secondary tool-language, I can't imagine images, native compilers, and REPL being so critical as to drop Ruby. What better options are there? And the points about native compilers in the responses are well put: it's a trade-off, and in the Ruby world you can drop into C where you need to (not ideal, but sufficient). As Donald Knuth put it, "premature optimization is the root of all evil".
A question back to the original poster: I think that many Rubyists and Pythonists return to native compiled languages etc. only to drop them because the loss of productivity is too painful and costly. What effects do the problem domains you code in have on this tradeoff, and what solutions have you found?
Images would be nice, but the scale of most Ruby programs I write, and the methods that I write them by, don't make me lament them too much. One epiphany test-driven development has produced in programmers is that tests drive the design, forcing good design early and often; hence the common statement that TDD isn't really about testing.
The reason for this is that unit testability requires clean interfaces, strong cohesion, and loose coupling, i.e. good modular code. Constantly subjecting code to testing from the start forces continual design decisions and refactoring to maintain testability, and the modularity is easier to maintain as a result.
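To make that concrete, here is a minimal sketch of the kind of small, isolated test TDD tends to produce, using Ruby's bundled Test::Unit. LinkChecker and its interface are hypothetical, loosely inspired by the link-tracing example quoted below, not actual code from the thread:

```ruby
require 'test/unit'

# A small class kept testable: no globals, no file I/O in the core logic --
# the page graph is injected through the constructor as plain data.
class LinkChecker
  # pages: hash mapping a page name to the list of pages it links to
  def initialize(pages)
    @pages = pages
  end

  # Breadth-first walk from +root+; returns pages never reached by any link.
  def unreached(root)
    reached = {}
    queue = [root]
    until queue.empty?
      page = queue.shift
      next if reached[page]
      reached[page] = true
      queue.concat(@pages[page] || [])
    end
    @pages.keys - reached.keys
  end
end

class TestLinkChecker < Test::Unit::TestCase
  def test_finds_orphan_pages
    pages = { 'index' => ['a'], 'a' => [], 'orphan' => [] }
    assert_equal ['orphan'], LinkChecker.new(pages).unreached('index')
  end
end
```

Because the class takes its data as an argument rather than scanning a directory itself, the test runs in milliseconds and never touches the filesystem; that is the coupling/cohesion pressure described above.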
If your code requires 10 minutes to get to a certain point, and you can't easily capture that state with a data-only dump (e.g. Marshal or YAML) and then continue testing from that point, that would be a good motivation to refactor. However, if TDD hasn't been followed up to that point, the code may be hard to separate due to the design debt built up by not having the code constantly challenged by small tests. (My solution there is to use log4x to do a pseudo-design-by-contract, make core domain classes invariant, eliminate trivial singletons/globals, find a couple of good "cut" points to get pseudo-modularity, and then try to use TDD going forward.)
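As an entirely hypothetical sketch of that data-only dump idea: cache the slow phase's result with Marshal so later test runs resume from the saved point. The names (STATE_FILE, expensive_phase_one) are invented for illustration:

```ruby
STATE_FILE = 'phase_one_state.dump'

# Stand-in for the ten-minute computation; in practice this would be the
# expensive setup you want to skip on subsequent test runs.
def expensive_phase_one
  { 'index.html' => ['a.html', 'b.html'], 'a.html' => [] }
end

# Load the saved state if a dump exists; otherwise compute it and save it.
def load_or_build_state
  if File.exist?(STATE_FILE)
    File.open(STATE_FILE, 'rb') { |f| Marshal.load(f) }   # resume instantly
  else
    state = expensive_phase_one                           # the slow path
    File.open(STATE_FILE, 'wb') { |f| Marshal.dump(state, f) }
    state
  end
end

first  = load_or_build_state   # slow: builds and saves the dump
second = load_or_build_state   # fast: reloads the dump
```

The catch, per the point above, is that this only works if the state is separable plain data; entangled singletons and live handles won't Marshal, which is exactly where the refactoring pressure comes from.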
(Aside: Personally, I learned more about design and design patterns via TDD and refactoring in six months than I did in 5 years of pattern reading. In my real world career, the problems and problem domains I coded tended to preclude writing the same program multiple times and personally trying various design approaches. And the nature of traditional coding practice is that the early design decisions tend to stick. So most projects get stuck with marginal designs. The nature of TDD/refactoring and their emergent design practices is that you get to see broken/fixed design elements continually in the same coding process, and the more elegant and robust end product makes for a much more intense learning experience.)
I guess the short summary of my long-winded point is that unit tests and TDD somewhat obviate the need for images in the development process, and I think other responses indicated that REPL is well handled in Ruby and that the speed issue is well enough handled by using C to provide point optimization where necessary (in clear recognition that certain problems and problem domains would find this insufficient).
Regards.
Nick
TLOlczyk wrote:
On Thu, 01 Jul 2004 10:45:54 GMT, gabriele renzi <surrender_it@remove.yahoo.it> wrote:
It would also be nice to have an optional core/image.
That way if I run tests where testing uses a lot of data,
I can save images when I debug later stages and
save myself some time initializing.
I'm not sure what you mean here. Do you want to freeze the interpreter
in a given state? Or do you feel the need for some serialization
format that can be retrieved at any time fast? (in this case Marshal
could fit, maybe?)
Sigh. The way things are going today, I've finally gotten to Usenet
today, but on the radio is a very good discussion of the American
Revolution. So I can't pay attention to the whole thread, but
I will discuss this part.
Let us as an example look at a program that looks at a directory of
Web pages, and traces through them. If it cannot find a link to a
certain page, then it creates a link on a page in a prescribed manner.
You've written the program and tested it on a small data set.
Now you want to test it on a large data set. It fails miserably and you try to debug.
Phase one of the program gets a list of all the files in the directory
( recursively ).
Phase two is to scan the root files for hrefs. Then scan the files in
the hrefs. Then ... until there are no more files to scan.
Phase three is to pick some file you didn't reach and create a link to
it.
So you work on Phase one and get it to run.
Then you work on Phase two. But it takes Phase one
ten minutes to run. So each time you test a fix in phase two
it takes at least ten minutes to run.
So you save an image at the end of phase one. Each time
you test phase two, you load the image from phase one
and just test phase two. Save ten minutes on each test run.
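That save-an-image-per-phase workflow can be approximated without images by serializing phase one's output; a hypothetical sketch using YAML (the dump file name and glob pattern are invented, and this assumes phase one's result is plain data):

```ruby
require 'yaml'

DUMP = 'phase_one.yml'

# Phase one: recursively list the pages. If a dump from a previous run
# exists, reload it instead of repeating the ten-minute scan.
files = if File.exist?(DUMP)
          YAML.load_file(DUMP)               # skip straight to phase two
        else
          list = Dir.glob('**/*.html')       # the slow recursive listing
          File.open(DUMP, 'w') { |f| f.write(list.to_yaml) }
          list
        end

# ... phase two (scanning for hrefs) proceeds from `files` either way
```

Unlike a true image this only captures data, not interpreter state, but for a pipeline of distinct phases it buys the same ten minutes per test run, and the YAML dump has the side benefit of being human-readable when debugging.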
The reply-to email address is olczyk2002@yahoo.com.
This is an address I ignore.
To reply via email, remove 2002 and change yahoo to interaccess.
**
Thaddeus L. Olczyk, PhD
There is a difference between
*thinking* you know something,
and *knowing* you know something.