Found this site about how languages stack up against each other. This test
put Ruby far behind the others. Has this changed with 1.7? What’s the deal
with these results?
That test creates 100,000 ruby threads and uses lots of thread
primitives. Threads are pretty slow in Ruby because of the way they
are implemented (each thread switch involves a large memcpy() of the
stack).
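To get a rough feel for that switching cost, here is a small micro-benchmark of my own (not from the shootout): a second thread busy-yields while the main thread forces switches with Thread.pass. Thread.pass is only a hint to the scheduler, so treat the numbers as an estimate.

```ruby
require 'benchmark'

# Rough micro-benchmark of thread context switches: a spinner thread
# busy-yields while the main thread forces n switches with Thread.pass.
# (Thread.pass is only a scheduler hint, so this is an estimate.)
def switch_cost(n)
  done = false
  spinner = Thread.new { Thread.pass until done }
  secs = Benchmark.realtime { n.times { Thread.pass } }
  done = true
  spinner.join
  secs
end

n = 100_000
printf("%d Thread.pass calls: %.3fs\n", n, switch_cost(n))
```

On a green-threaded interpreter most of that time is the stack copying described above.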
Ruby should probably steal thread implementation ideas from Perl,
Python or Pike. Perhaps this will happen with Rite.
This is one of the ‘same way’ tests. For the same problem I’d rather
use Queue which, judging from the comparative speed of my machine,
brings Ruby right in there around Python.
Note that what this test is really testing is the speed of the Mutex
lock/unlock, thread switching, and the condition variable. Reading the
Alternates page, there is a Python example that uses a Queue too.
code:
#!/usr/local/bin/ruby
require 'thread'

def main(n)
  q = Queue.new # shared
  consumed = produced = 0
  consumer = Thread.new do
    i = 0
    until i == n
      i = q.pop
      consumed += 1
    end
  end
  producer = Thread.new do
    1.upto(n) do |i|
      q.push(i)
      produced += 1
    end
  end
  producer.join
  consumer.join
  puts "#{produced} #{consumed}"
end

main(Integer(ARGV.shift || 1))
end code
$ time ruby prodcon-queue.rb 100000
100000 100000
real 0m41.863s
user 0m38.061s
sys 0m0.235s
$ time ruby prodcon-bagley.rb 100000
100000 100000
Found this site about how languages stack up against each other. This test
put Ruby far behind the others. Has this changed with 1.7? What’s the deal
with these results?
My experience is that Ruby speed very, very much depends on the build
you’re using. I’m talking about Ruby on Windows, PragProg builds (as
somebody here once said, “yes, I’m a lazy bum that doesn’t build his own
Ruby”). I occasionally run stuff on Linux, but I never measured many
different versions there, so I can’t say how much speed depends on that.
Back to Ruby on Windows: cygwin 1.6.x builds were extremely fast, far
faster than Python (up to 20 (yes, twenty) times on some of my real-world
samples), and 4 to 5 times faster than equivalent Perl scripts, where both
Perl and Python were the latest ActiveState builds at the time (about a year
and a half ago), and the programs were structurally identical. Then came the
sad period of native (non-cygwin) Win32 1.6.x builds, which were much, much
slower than cygwin (about 6 to 8 times on my tests, but never as slow as
Python). Then came the 1.7.x native builds where the never-buffered-read bug
was fixed, which brought the 1.7.x builds to only twice slower than the
1.6.x cygwin builds. Tests included massive directory traversals, file
reading and regexing.
For this reason I always keep at least two versions of Ruby installed -
cygwin 1.6.5 for stuff where speed matters (i.e. there’s lots of big file
reading and parsing going on), and latest 1.7.3 for stuff where speed
doesn’t mean much.
Regards,
M.
PS Please don’t flame me for figures (20x, 4x, etc.) mentioned in this post.
I am fully aware that those figures only relate to my limited samples and in
no way present actual general (whatever that would mean) performance rates
between mentioned languages. Etc. etc. etc.
That test creates 100,000 ruby threads and uses lots of thread
primitives. Threads are pretty slow in Ruby because of the way they
are implemented (each thread switch involves a large memcpy() of the
stack).
It doesn’t create 100,000 threads, only two. But I would guess that you
are correct that it’s all of the context-switching that’s slowing it down.
Ruby should probably steal thread implementation ideas from Perl,
Python or Pike. Perhaps this will happen with Rite.
Perl and Python use native (OS) threads. I think Matz has hinted in the
past that Rite will also support native threads.
No, the test creates two threads. It passes 100,000 integers from the
producer thread to the consumer thread via a shared variable ‘data’,
locking and unlocking a Mutex 200,000 times.
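From that description, the Mutex/ConditionVariable version looks roughly like this. This is a reconstruction from the description above, not necessarily the exact shootout code: a single shared slot, one Mutex and one ConditionVariable, so each of the n values costs a lock/unlock on both sides, plus the switches.

```ruby
#!/usr/local/bin/ruby
require 'thread'

# Reconstructed producer/consumer over a single shared slot `data`,
# guarded by a Mutex, with a ConditionVariable for the hand-off.
# `count` is 1 when the slot is full, 0 when it is empty.
def run(n)
  mutex = Mutex.new
  access = ConditionVariable.new
  count = data = 0
  consumed = produced = 0
  consumer = Thread.new do
    i = 0
    until i == n
      mutex.synchronize do
        access.wait(mutex) while count == 0 # wait until slot is full
        i = data
        count = 0
        access.signal                       # wake the producer
      end
      consumed += 1
    end
  end
  producer = Thread.new do
    1.upto(n) do |i|
      mutex.synchronize do
        access.wait(mutex) while count == 1 # wait until slot is empty
        data = i
        count = 1
        access.signal                       # wake the consumer
      end
      produced += 1
    end
  end
  producer.join
  consumer.join
  [produced, consumed]
end

produced, consumed = run(Integer(ARGV.shift || 1000))
puts "#{produced} #{consumed}"
```

That is roughly 2n Mutex lock/unlock pairs and up to 2n thread switches, which is what the Queue version above avoids paying for so dearly.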
That test creates 100,000 ruby threads and uses lots of thread
primitives. Threads are pretty slow in Ruby because of the way they
are implemented (each thread switch involves a large memcpy() of the
stack).
Yes, and this is more important than you might figure -
Ruby is fairly lightweight in starting up and in including other modules.
Because it is reasonably modest in memory consumption, it avoids stalling
all kinds of caches where Java might start thrashing.
Consider a .NET application - it spends a noticeable amount of time just
starting up, presumably due to initial compilation. I was never a big fan
of JIT - it’s like spending a weekend tuning your sports car for a shopping
trip.
It’s interesting that the fastest implementations listed on that
shootout page are also non-native.
Well, yeah, but they (at least the fastest one) use coroutines, which
are very lightweight and require (essentially) no scheduling
overhead. Basically, they explicitly pass control between each other
at certain points.
You could implement coroutines in Ruby with callcc and get much better
performance for this benchmark, I suspect.
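Here is a minimal sketch of that idea (my own, not the module mentioned below): a tiny asymmetric coroutine class on top of callcc that hands values straight from producer to consumer with no Mutex, no condition variable and no scheduler. On modern Rubies callcc lives in the ‘continuation’ stdlib and is deprecated in favour of Fiber.

```ruby
require 'continuation' # callcc; built in on old Rubies, stdlib today

# A tiny asymmetric coroutine built on callcc (a sketch, not the module
# referred to below). resume runs the coroutine until it calls
# transfer_back, whose argument becomes resume's return value.
class Coroutine
  def initialize(&block)
    @block = block
  end

  # Run (or continue) the coroutine; returns the next value it hands
  # back, or nil once its block has finished.
  def resume(value = nil)
    callcc do |caller_k|
      @caller = caller_k
      if @resume_point
        @resume_point.call(value)  # re-enter where we last suspended
      else
        @block.call(self)          # first resume: start the block
        @caller.call(nil)          # block finished: signal end
      end
    end
  end

  # Suspend, handing +value+ back to whoever called resume.
  def transfer_back(value)
    callcc do |resume_point|
      @resume_point = resume_point
      @caller.call(value)
    end
  end
end

# Producer/consumer with explicit control transfer: no locking at all.
def run(n)
  produced = consumed = 0
  producer = Coroutine.new do |co|
    i = 1
    while i <= n
      produced += 1
      co.transfer_back(i) # hand the value straight to the consumer
      i += 1
    end
  end
  while producer.resume
    consumed += 1
  end
  [produced, consumed]
end

produced, consumed = run(Integer(ARGV.shift || 1000))
puts "#{produced} #{consumed}"
```

Each hand-off is just two continuation jumps, which is exactly the “explicitly pass control between each other” pattern the fast shootout entries use.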
A little searching turns up this coroutine module. I haven’t tried
it, but it looks pretty straightforward: