This study skipped something that seems rather important to me if he’s
measuring compression: the ratios between the compressed and
uncompressed files. These would tell us, by the standards of whatever
compression algorithm was used, which implementations contain the most
information per unit of size (since compression strips out redundant
information). Here are the ratios:
Of course bzip2 isn’t an ideal compression algorithm, so this sort of
measure should be taken with a grain of salt.
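For what it’s worth, here is roughly how such ratios can be computed; a
minimal sketch using Python’s bz2 module, with a made-up file list (the
study presumably just ran the bzip2 command line over its inputs):

  import bz2

  # Made-up file list; the study's actual inputs aren't reproduced here.
  files = ["wordcount.rb", "wordcount.java", "wordcount.sml"]

  for name in files:
      with open(name, "rb") as f:
          data = f.read()
      compressed = bz2.compress(data)
      # Compressed/uncompressed ratio: lower means more redundancy was
      # squeezed out, i.e. less information per byte of source.
      ratio = len(compressed) / len(data)
      print("%-16s %6d -> %6d bytes, ratio %.3f"
            % (name, len(data), len(compressed), ratio))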
Another thing I noticed was the rather large size of the SML versions
relative to the others. This surprises me somewhat, since any typical
SML code I’ve written is more compact than the Java version of the same
thing, even though I’m a much better Java programmer (I had a class
where I wrote equivalent versions of a bunch of programs in both
languages). Perhaps this reflects the fact that all the apps used in
this test were quite simple? That caveat probably applies to some of
the other languages as well.
On Wed, Jul 31, 2002 at 11:22:28PM +0900, email@example.com wrote:
Sorry if this has been posted already and I missed it.
I thought it was interesting.
Source code size can be compared in different ways: in terms of
lines of code (LOC), in bytes, or after compression, which gives a
rough estimate of complexity because it filters out things like the
length of keywords. Interestingly, while some languages show very
different ranks before and after compression, Ruby code is among the
very shortest both before and after.
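(A quick sketch of those three measures side by side, again assuming
Python’s bz2 stands in for whatever compressor was actually used, and
with a made-up filename:

  import bz2

  def size_measures(path):
      with open(path, "rb") as f:
          data = f.read()
      # Crude LOC: raw newline count; real LOC counts usually skip
      # blank lines and comments.
      loc = data.count(b"\n")
      raw_bytes = len(data)
      packed_bytes = len(bz2.compress(data))
      return loc, raw_bytes, packed_bytes

  loc, raw, packed = size_measures("wordcount.rb")  # made-up filename
  print("LOC %d, bytes %d, compressed %d" % (loc, raw, packed)))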