Tim,
Thanks for your reply on -talk. Hmm, guess I'll cc there.
I'm very interested in this issue, but I don't yet know enough
about .NET (or Mono) to fully grasp it.
Here I'm using Mono and .NET interchangeably -- if there's a
good reason not to, let me know.
None the "bridge" projects I described currently work in Mono, since they
all use (fairly small) amounts of "Managed C++" (C++ for .NET) and Mono
doesn't have a compiler for this.
However, the amount of Managed C++ used in each is small, and e.g. in my
project (rubydotnetproxy) I have implemented Mono support in about 20 lines
of code. (There are some serious bugs though, not sure if these are my fault
or Mono's. I haven't released Mono support yet.)
But in general, if you avoid Managed C++ then Mono and .NET support will be
the same.
There's been talk about an IronRuby (though I wouldn't want to
call it that) to match the IronPython that recently came into
being.
However, I'm not sure exactly what this should consist of:
1. A Ruby interpreter ported to Mono?
2. A native (x86/whatever) interpreter that spits out CIL bytecodes?
3. A combo of these? An interpreter ported to Mono that outputs CIL?
4. A library that simply allows calling the CLR and such?
5. Or something else entirely?
4 seems weak.
1 does also.
2 (essentially cross-compiling) seems inconsistent.
3 tentatively seems best to me.
5 -- who knows?
Thanks for any insights...
One of the important issues is "how do (statically-typed) languages like C#
call Ruby code".
In the worst case, you have to do
foo.Call("SomeRubyMethod", 123);
This is "bad" because it doesn't look like a normal method call. You really
want to write
foo.SomeRubyMethod(123);
This requires generating an interface to the Ruby code at compile time. The
simplest interface would look something like this (C# code):
class String : RbObject {
    public RbObject capitalize(params RbObject[] args) {
        // ...
    }
    // ...
}
i.e. you don't need to know the number of arguments (although it can help
performance).
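To make that concrete, here's a rough sketch of what the body of such a
generated method might do. RbObject and its Call method are just names
assumed from the discussion above, not an existing API, and I've called the
wrapper RbString only to avoid clashing with System.String:

class RbObject {
    // Late-bound call into the Ruby interpreter via the bridge:
    // marshal the arguments, invoke the Ruby method, wrap the result.
    public RbObject Call(string methodName, params RbObject[] args) {
        // ...
        throw new System.NotImplementedException();
    }
}

class RbString : RbObject {
    // Generated wrapper: forwards straight to the generic Call, so the
    // C# caller gets something that looks like a normal method.
    public RbObject capitalize(params RbObject[] args) {
        return Call("capitalize", args);
    }
}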
Obviously we can't know all interfaces at compile time, but knowing "most
things" and having to use Call("Foo") for the rest could still be usable.
I've been musing over the idea of using documentation tools (like rdoc) for
generating public interfaces. People already have to deal with issues like
"this method exists but you can't tell at compile time" in documentation.
Microsoft have said they're considering adding an opcode to do dynamic
method invocation. If they implement this, I hope they also change languages
like C# so that if a program uses a type (class) marked as 'dynamic' and
attempts to call a method that is not declared, the compiler will insert a
'dynamic_call' operation instead of failing to compile.
This would be very very good for Ruby and other dynamically typed languages.
It would make them "first class citizens" on the CLR.
As for the choice between bridging the current Ruby interpreter to the CLR
and writing a new compiler/interpreter for Ruby:
Bridge advantages
- Since it's using the normal Ruby interpreter, all your Ruby code will
continue to work. You can use existing libraries, C extensions etc.
without any problems. Always up to date with current Ruby.
If you were compiling to the CLR you would either have to drop support
for current C extensions or painfully reimplement Ruby's C interface.
The latter is hard since lots of low-level things are exposed, like the
implementation of Strings, Arrays, etc.
On the other hand, if Ruby 2.0 is going to change the C interface
anyway...
- A lot less work to implement. It's easy to get something simple that has
a limited amount of interfacing, but is still useful.
In comparison, writing a compiler and new runtime implementation
requires a lot of work before it can be used.
Bridge disadvantages
- Have two garbage collectors and threading models. Threading is a bit of
a pain, since .NET uses system threads (and may in the future also use
user-level threads) while Ruby has user threads.
This is the same problem Ruby has interfacing to any library that uses
system threads.
Advantages of new compiler/interpreter
- Perception that Ruby is working "properly" on .NET.
- Don't have the "Bridge disadvantages" - have one garbage collector and
threading model.
As far as performance goes, I don't think there is a huge difference between
the two approaches. For example, IronPython runs a bit faster than CPython
in general, and since it's a new project with lots of work being done I'd
expect it to improve faster than CPython.
At the moment there is a belief that "bridges are slow", since they all use
.NET's Type#InvokeMember (which is horrendously slow; see the example below)
when Ruby calls .NET methods. But:
- Ruby code runs at normal speed.
- .NET code runs at normal speed.
- It's only when Ruby calls .NET or vice versa that we get slowness
(method invoking and marshalling).
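To illustrate why Type#InvokeMember is the bottleneck: it re-resolves the
member by name on every single call. A minimal (non-bridge) example, with a
made-up Greeter class standing in for a real target:

using System;
using System.Reflection;

class Greeter {
    public string Greet(string name) { return "Hello, " + name; }
}

class SlowPath {
    static void Main() {
        object target = new Greeter();

        // The name lookup and overload resolution happen again on every
        // call -- this is where the per-call overhead comes from.
        object result = target.GetType().InvokeMember(
            "Greet",
            BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
            null,                       // default binder
            target,
            new object[] { "Ruby" });

        Console.WriteLine(result);      // prints "Hello, Ruby"
    }
}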
There are better ways of doing things than using InvokeMember, e.g.
GetMethod, which searches for a method and then lets you call it multiple
times. There's some future stuff called "Lightweight Code Generation",
which will let you output small amounts of IL (.NET assembly) and load
them efficiently.
Performance of bridges is what I'm working on at the moment. Things like
caching method searches and reducing the amount of marshalling (e.g. passing
strings/arrays between Ruby and .NET) will improve performance dramatically,
I think.
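Here's a minimal sketch of the caching idea. The cache and key shape are my
own illustration; GetMethod and MethodInfo.Invoke are the real reflection
calls:

using System.Collections;
using System.Reflection;

class MethodCache {
    private readonly Hashtable cache = new Hashtable();

    public object Call(object target, string name, params object[] args) {
        string key = target.GetType().FullName + "#" + name;

        MethodInfo method = (MethodInfo) cache[key];
        if (method == null) {
            // One-off search; a real bridge also has to deal with
            // overloads and argument-type matching here.
            method = target.GetType().GetMethod(name);
            cache[key] = method;
        }

        // Reusing the MethodInfo avoids InvokeMember's per-call lookup.
        return method.Invoke(target, args);
    }
}

This ignores overloads, visibility and thread safety, but it shows where the
per-call cost goes away.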
···
In article <41196E4F.1060404@hypermetrics.com>, Hal Fulton wrote:
--
Tim Sutherland <timsuth@ihug.co.nz>
2004 SDKACM President
Software Developers' Klub - the University of Auckland ACM Student Chapter
http://www.sdkacm.com/