(There was just a big discussion on Ruby performance, or lack thereof.
Google "Performance difference between ifs and case", and learn that
A> premature optimization is the root of all evil, and B> you can
profile each system with unit tests and the Benchmark library.)
Some quick benchmarking suggests that for small numbers of items
they're about equal, but the case version slows down faster as the
number of choices increases, presumably because it has to do a
comparison for each choice instead of a single hash fetch.
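Something along these lines (a rough sketch; NAMES, by_case and
by_hash are just names I made up for illustration) shows the shape of
the comparison:

require 'benchmark'

# Look up a value both ways, always hitting the last choice so the
# case statement has to scan every branch. (Illustrative names only.)
NAMES = { 1 => "one", 2 => "two", 3 => "three", 4 => "four", 5 => "five" }

def by_case(n)
  case n
  when 1 then "one"
  when 2 then "two"
  when 3 then "three"
  when 4 then "four"
  when 5 then "five"
  end
end

def by_hash(n)
  NAMES[n]   # a single hash fetch, however many keys there are
end

Benchmark.bm(5) do |b|
  b.report("case") { 1_000_000.times { by_case(5) } }
  b.report("hash") { 1_000_000.times { by_hash(5) } }
end

Grow the number of choices and the case timing should climb, while
the hash timing stays roughly flat.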
It would also depend on which one matched, though -- for example:
case 1
when 1 then "one"
when 2 then "two"
...
when 1000000 then "one million"
end
will match on the first comparison, so it doesn't matter that there
are a million of them.
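If you want to see that effect directly, one quick (if hacky) way is
to generate a big case with eval and time a value that matches the
first test against one that matches the last. This is just a sketch;
spell is a made-up method name:

require 'benchmark'

# spell is a hypothetical helper generated only for this test:
# a 1000-branch case statement built up as a string and eval'd.
branches = (1..1000).map { |i| "  when #{i} then #{i}" }.join("\n")
eval "def spell(n)\n  case n\n#{branches}\n  end\nend"

Benchmark.bm(6) do |b|
  b.report("first") { 100_000.times { spell(1) } }     # stops at branch 1
  b.report("last")  { 100_000.times { spell(1000) } }  # scans all 1000 tests
end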
In this case it's hash lookup vs. case scanning, which is
significantly different from if vs. case. Since hash lookups should
be O(1) and scanning all the clauses of a case statement is O(n),
using the hash would in theory be faster.

However, if you're only doing the lookup a couple of times and there
are only a few cases, the overhead of creating the Hash might make
the case statement faster. In fact, constructing the hash the first
time is itself at least O(n), so the hash method will almost
certainly be slower if you only do the lookup once. Of course, as
you said, the best way to find out is to benchmark it yourself.
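A quick sketch of that trade-off (PAIRS is just illustrative sample
data): paying the construction cost on every lookup versus building
the table once up front:

require 'benchmark'

# PAIRS is hypothetical sample data: 1000 key/value pairs.
PAIRS = (1..1000).map { |i| [i, i.to_s] }

Benchmark.bm(18) do |b|
  b.report("build every time") do
    10_000.times { Hash[PAIRS][500] }   # O(n) construction per lookup
  end
  b.report("build once") do
    table = Hash[PAIRS]                 # construction cost amortized away
    10_000.times { table[500] }
  end
end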
Not all optimization is premature, and it's not the case that no
information is available to people before they start to code
something. Performance questions can also be worth exploring purely
out of academic or scientific interest, quite apart from any
particular coding project.
There's a long history of quite interesting discussion about
performance on this list, often involving benchmark reports (and
therefore written with explicit awareness of how relevant the
benchmarking process is). Those discussions can be very
illuminating, and I'd prefer not to see them routinely squashed
before they have a chance to start.
They're only going to get more interesting in the future, too, as we
have more and more Ruby implementations to play with...