Compiling Ruby to Native Code?

After following a post to this group, I had a look at OCaml. One thing
that stands out about OCaml is its native compilation and raw
speed, even compared with C and C++.

What prevents Ruby from achieving native compilation? Is it dynamic typing?
If so, could Ruby implement a 'require static' directive to force strict
type checking and make machine-level compilation possible?

If Ruby could compile to native, forget byte compilation. The one
thing that's been on my Java Christmas list for seven years now
has been native compilation. Just give me a 'native' compiler;
I'll manage the rest.

Game over: if Java or Ruby provides native compilation, it won't make
sense to code in anything else.

//ed

“Edward Wilson” web2ed@yahoo.com wrote in message
news:5174eed0.0208051713.5c10a2d8@posting.google.com

If Ruby could compile to native, forget byte compilation.

Actually, one of OCaml's strengths is its bytecode compiler. The bytecode
(in an internal representation) is used to feed the native-code compilers
for the various platforms.
The bytecode allows for domain-centric optimizations.

Ruby will not benefit as much from native compilation. The important issues
are efficient runtime libraries and inlining / reduction of function calls.
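The cost of extra function calls, which this post flags as the real bottleneck, can be sketched with Ruby's standard Benchmark module. This is a rough micro-benchmark, not a definitive measurement; the absolute numbers depend entirely on the machine and Ruby version.

```ruby
# A rough micro-benchmark of function-call overhead, using Ruby's standard
# Benchmark module. The point is only that routing every addition through an
# extra method call costs time compared with doing the work inline.
require 'benchmark'

def add(a, b)
  a + b
end

n = 500_000

with_call = Benchmark.measure do
  s = 0
  n.times { |i| s = add(s, i) }
end

inlined = Benchmark.measure do
  s = 0
  n.times { |i| s = s + i }
end

puts format("with method call: %.3fs", with_call.real)
puts format("inlined:          %.3fs", inlined.real)
```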

A good Ruby bytecode implementation would have an internal graph for the
bytecode, a bytecode emitter, and eventually several native code emitters
working off the same bytecode graph. The C-- project seems to progress
slowly, but www.cminusminus.org is an interesting project for native code
generation.

Mikkel

Doesn't gcj (the GNU Compiler for Java, part of GCC) compile to native code
as an option?

···

On Tue, Aug 06, 2002 at 10:19:54AM +0900, Edward Wilson wrote:

Game over: if Java or Ruby provides native compilation, it won't make
sense to code in anything else.


Alan Chen
Digikata LLC
http://digikata.com

What prevents Ruby from achieving native compilation? Is it dynamic typing?
If so, could Ruby implement a 'require static' directive to force strict
type checking and make machine-level compilation possible?

I don’t think dynamic typing is the problem. We’ve had an Objective-C
compiler in GCC for about ten years or so, and the fact that Objective-C
is even more rabidly dynamic than Ruby hasn’t stopped the folks at NeXT
and the GCC developers from producing a native-code compiler for it.
True, it makes the compiler more complicated and slower, but that didn’t
hold them back.

If Ruby could compile to native, forget byte compilation. The one
thing that's been on my Java Christmas list for seven years now
has been native compilation. Just give me a 'native' compiler;
I'll manage the rest.

Byte compilation does have its uses. Given that it's a lot of work to
write a native-code compiler for a given architecture, bytecode allows
a programming language to be used anywhere the bytecode interpreter can
be built, which is usually a matter of porting a C program; while not
totally trivial, that is trivial compared to writing a full native-code
compiler for a new architecture. Getting GCC to support a new
architecture, for example, means creating a new patch to binutils
in addition to the work involved in writing the GCC backend. That's the
logic behind the existence of the OCaml bytecode compiler: it allows the
language to be used with reasonable performance on many more platforms
than would otherwise be possible, as well as providing a means for
bootstrapping a compiler.

Game over: if Java or Ruby provides native compilation, it won't make
sense to code in anything else.

I humbly disagree. While Java and Ruby are very good general-purpose
languages, I think that there’s still a place for more specialized
languages to solve particular problems, even if it’s just for
prototyping. I’ve found that languages like OCaml and Haskell are
excellent for prototyping problems involving complex data structures,
and give you new and better insights into how the problems might be
solved. Result: you have cleaner, more elegant code that’s easier to
understand and maintain, once you go from your prototype to the actual
design in the target language. If for nothing else but increasing
mental flexibility, other languages are useful.

···

On Tue, Aug 06, 2002 at 10:19:54AM +0900, Edward Wilson wrote:


Rafael R. Sevilla +63(2)8123151
Software Developer, Imperium Technology Inc. +63(917)4458925

web2ed@yahoo.com (Edward Wilson) writes:

What prevents Ruby from achieving native compilation, dynamic
typing?

Not at all. There have been excellent optimizing Common Lisp
compilers (generating code comparable to, and occasionally faster
than, the code generated by C compilers) for at least a decade. I am
sure a native-code Ruby compiler would be technically possible.

Game over: if Java or Ruby provides native compilation, it won't make
sense to code in anything else.

For you, perhaps, but don’t pretend to speak for everybody else. As
much as I like Ruby, the language has a very long way to go before I
would prefer it over Lisp for big, complicated programs. YMMV, as
always.

···


Tord Romstad

%% > Game over: if Java or Ruby provides native compilation, it won't make
%% > sense to code in anything else.
%%
%% I humbly disagree. While Java and Ruby are very good general-purpose
%% languages, I think that there’s still a place for more specialized
%% languages to solve particular problems, even if it’s just for
%% prototyping. I’ve found that languages like OCaml and Haskell are
%% excellent for prototyping problems involving complex data structures,
%% and give you new and better insights into how the problems might be
%% solved. Result: you have cleaner, more elegant code that’s easier to
%% understand and maintain, once you go from your prototype to the actual
%% design in the target language. If for nothing else but increasing
%% mental flexibility, other languages are useful.
%%
%% –
%% Rafael R. Sevilla +63(2)8123151
%% Software Developer, Imperium Technology Inc. +63(917)4458925
%%

I agree with Rafael. Java and Ruby are primarily OO languages,
paradigmatically speaking. While OO has clear and obvious benefits over
other paradigms for particular sets of problems, it is often the hammer that
makes every problem a nail in computer science these days.

As OO languages go, I think Ruby's dynamic ObjectSpace adds an interesting
set of capabilities to the programmer's abstraction-construction toolchest
(so to speak), particularly in terms of metaprogramming. I definitely like
it more than Java for that reason.
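A minimal sketch of what that dynamism looks like in practice, using an invented Widget class: ObjectSpace can enumerate every live instance, and a class can be reopened after objects already exist.

```ruby
# ObjectSpace lets a program walk every live object of a class, and
# metaprogramming lets a class be extended at runtime. Widget is a
# made-up class for illustration.

class Widget
  attr_reader :name
  def initialize(name)
    @name = name
  end
end

a = Widget.new("a")
b = Widget.new("b")

# Enumerate every live Widget instance via ObjectSpace.
names = []
ObjectSpace.each_object(Widget) { |w| names << w.name }

# Reopen the class and add a method after instances already exist.
class Widget
  def describe
    "widget #{@name}"
  end
end

puts a.describe   # "widget a" -- the new method is visible on old instances
puts names.sort.inspect
```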

However, I am a supporter of exploring other paradigms. OCaml and Haskell,
mentioned above, are indeed interesting and quite powerful in their domains.
Another, somewhat more obscure, language that I have a certain affection for
is Mozart-Oz, which supports not one, not two, but no fewer than FIVE
computational paradigms, all in the same very efficient kernel language.
Like Java, Oz compiles to cross-platform bytecode. But unlike Java, its
network concurrency is entirely transparent to the programmer, and Oz
"functors" (or components) are free to use any paradigm internally that gets
the job done. Is your problem a constraints problem? Use the declarative
aspects of the kernel language, a la Prolog. Is the problem more simply
modelled after human real-world abstractions? Then use OO, complete with all
the trappings thereof (inheritance, encapsulation, polymorphism, etc.). Is
your problem computationally expensive? Then use the functional subset of
the kernel language, a la Haskell. It's really an amazing feat in computer
science to have combined the paradigms as cleanly and completely as they (the
Mozart consortium folks) have.

If it interests you at all, check them out at www.mozart-oz.org.

Now for day-to-day practical problems I find that Ruby more than suffices
for just about everything, and I’m in love with its extensibility. I’m
working on some exciting packages for the Ruby community, and I want to see
this community – and the language – grow. In fact, I’d say I’m personally
committed to that goal.

But there is something to be said for the multiparadigm approach, whether
you achieve it by blending components written in different languages (and
.NET doesn't really count here, since all .NET languages share the same
fundamental OO semantics, via the CLR), or you use one language that
supports all of them. For some problem domains, a strictly OO methodology is
not the optimal approach.

So - that’s all I have to say about that.

Back to Ruby coding… :-)

Sincerely,

Bob Calco

“Rafael ‘Dido’ Sevilla” dido@imperium.ph wrote in message news:20020807033226.GB1745@imperium.ph

I don’t think dynamic typing is the problem. We’ve had an Objective-C
compiler in GCC for about ten years or so, and the fact that Objective-C
is even more rabidly dynamic than Ruby hasn’t stopped the folks at NeXT
and the GCC developers from producing a native-code compiler for it.
True, it makes the compiler more complicated and slower, but that didn’t
hold them back.

Dynamic typing is the one and only problem! It seems that you never did
serious Objective-C programming, otherwise you would know that it is C
with objects, and most of the important data-structure algorithms are
done with the static C types.

You will never get a compiler as efficient as OCaml's or Eiffel's for a
dynamic language: both are able to do full-system analysis, so they
don't need any checks for virtual method calls.

Tord Kallqvist Romstad romstad@math.uio.no wrote in message news:gqkofcfq7u1.fsf@priapos.uio.no

web2ed@yahoo.com (Edward Wilson) writes:

What prevents Ruby from achieving native compilation, dynamic
typing?

Not at all. There have been excellent optimizing Common Lisp
compilers (generating code comparable to, and occasionally faster
than, the code generated by C compilers) for at least a decade. I am
sure a native-code Ruby compiler would be technically possible.

I also must repeat myself here. There are only two ways to get good
Lisp speed: use `(the ...)` declarations to introduce static typing, or
reduce the safety level of the compiler so that you must always provide
the correct type.

Never written a compiler, right?

llothar@web.de (Lothar Scholz) writes:

Tord Kallqvist Romstad romstad@math.uio.no wrote in message news:gqkofcfq7u1.fsf@priapos.uio.no

web2ed@yahoo.com (Edward Wilson) writes:

What prevents Ruby from achieving native compilation, dynamic
typing?

Not at all. There have been excellent optimizing Common Lisp
compilers (generating code comparable to, and occasionally faster
than, the code generated by C compilers) for at least a decade. I am
sure a native-code Ruby compiler would be technically possible.

I also must repeat myself here. There are only two ways to get good
Lisp speed: use `(the ...)` declarations to introduce static typing, or
reduce the safety level of the compiler so that you must always provide
the correct type.

You are right here, of course. In order to achieve C-like
performance, you will have to use type declarations in parts of your
code. However,

  1. You still have all the benefits of dynamic typing while developing
    your program. There is no need to add any type declarations before
    the program is completed and you discover that it is not fast
    enough.

  2. Even without any type declarations, compiled Lisp code performs
    rather well. It is certainly much faster than interpreted
    languages like Ruby.

My knowledge of Ruby is still rather limited, but I don’t see why
adding a native-code compiler should be more difficult than for Lisp.
Even with dynamic typing, the compiled code would probably perform
much better than interpreted Ruby. Add optional type declarations
(like in Lisp), and it should be possible to achieve the same speed as
with optimized Lisp code.

Do you disagree? If so, what properties of Ruby make it harder to
compile than Lisp?

Never written a compiler, right?

Of course I have. I doubt that there are many Lisp programmers who
have not (at least) written a compiler for some subset of Scheme.

···


Tord Romstad

Dynamic typing is the one and only problem! It seems that you never did
serious Objective-C programming, otherwise you would know that it is C
with objects, and most of the important data-structure algorithms are
done with the static C types.

Are you trying to say that if I do:

gcc -x objective-c -o test test.m -lobjc

the resulting executable is NOT native code? All I’ve been trying to
say is that a native code compiler does exist for Objective-C, and
dynamic typing has not stopped NeXT/GCC from making it.

You will never get a compiler as efficient as OCaml's or Eiffel's for a
dynamic language: both are able to do full-system analysis, so they
don't need any checks for virtual method calls.

The fact that this inefficiency exists does not change the fact that you
can make a native-code compiler for the language. Unless your program
is highly method-invocation-bound, the native-code version will probably
run much faster than the interpreted one. And even then, it may be
possible to provide optimizations; I imagine a lot of research has gone
into optimizing dynamic method invocations, as polymorphism is worthless
without some form of dynamic typing.

···

On Thu, Aug 08, 2002 at 08:45:07AM +0900, Lothar Scholz wrote:


Rafael R. Sevilla +63(2)8123151
Software Developer, Imperium Technology Inc. +63(917)4458925

“Bob Calco” robert.calco@verizon.net wrote in message
news:NCEJJNLDMEJLEJHKNGNHEEFADFAA.robert.calco@verizon.net

If it interests you at all, check them out at www.mozart-oz.org.

I did look briefly at Mozart. I don't recall the details, but I think it
either seemed abandoned (or was that a related project to Oz?) or it
required a huge installation to run, preventing efficient deployment - that
would put it in the same ballpark as Erlang.

Mikkel

Bob Calco wrote:


However, I am a supporter of exploring other paradigms. Ocaml and Haskell,
mentioned above, are indeed interesting and quite powerful in their domains.
Another, somewhat more obscure, language that I have a certain affection for
is Mozart-Oz, which supports not one, not two, but no less than FIVE
computational paradigms, all in the same very efficient kernel language.
Like Java, Oz compiles to cross-platform bytecode. But unlike Java, its
network concurrency is entirely transparent to the programmer, and Oz
"functors" (or components) are free to use any paradigm internally that gets
the job done. Is your problem a constraints problem? Use the declarative
aspects of the kernel language, a la prolog. Is the problem more simply
modelled after human real-world abstractions? Then use OO, complete with all
the trappings thereof (inheritance, encapsulation, polymorphism, etc.). Is
your problem computationally expensive? Then use the functional subset of
the kernel language, a la Haskell. It's really an amazing feat in computer
science to have combined the paradigms as cleanly and completely as they (the
Mozart consortium folks) have.

If it interests you at all, check them out at www.mozart-oz.org.

Now for day-to-day practical problems I find that Ruby more than suffices
for just about everything, and I’m in love with its extensibility. I’m
working on some exciting packages for the Ruby community, and I want to see
this community – and the language – grow. In fact, I’d say I’m personally
committed to that goal.

But there is something to be said for the multiparadigm approach, whether
you achieve that by blending components written in different languages (and
.NET doesn't really count here, since all .NET languages share the same
fundamental OO semantics, via the CLR), or you use one language that
supports all of them. For some problem domains, strictly OO methodology is
not the optimal approach.

So - that’s all I have to say about that.

Back to Ruby coding… :-)

Sincerely,

Bob Calco

The problem that I have with many of those paradigms is that they don't easily support the idea of data files - particularly data files that may be looked at and changed in some other language. This seems endemic in the functional languages, though of course the object-oriented languages have their problems with it also (as you know if you have ever tried to keep a relational database in sync with an object-oriented program). This is why marshalling was invented, and it's far from perfect. Only the procedural languages seem to have solved this one, and that largely by not directly representing the kinds of structures that cause problems.
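Ruby's built-in Marshal module is one concrete instance of the marshalling mentioned above: the object graph round-trips cleanly, but the byte format is Ruby-specific, which is exactly the cross-language problem being described.

```ruby
# Flatten an object graph to bytes with Marshal.dump, then rebuild it with
# Marshal.load. The restored structure is equal to the original, but the
# bytes are opaque to tools written in other languages.

record = { "id" => 42, "tags" => ["a", "b"], "nested" => { "ok" => true } }

bytes    = Marshal.dump(record)   # serialize to a Ruby-specific byte string
restored = Marshal.load(bytes)    # rebuild an equivalent object graph

puts restored == record           # true: the structure round-trips
puts bytes.length                 # nonzero, but not readable elsewhere
```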

I assume that there actually are ways to address this in Mozart, but I wasn't able to discover them in the time that I allotted for evaluation. Other than that it looked quite interesting, but that was such a large "but" that I never tried to implement anything. Ditto for Haskell.

Eiffel was quite interesting, and I rather like it. But some unnecessary choices (e.g., no operation redefinitions with altered parameter lists) have rendered it quite inflexible. I've used it a bit, but you need to be extremely careful not to name operations anything reasonable, e.g. substring, because that will cause grief. Instead use a name like substring_with_unspec_length. Operator overloading is quite basic to all other computer languages. Even Fortran 77 has what Eiffel considers a forbidden amount of operator overloading (+ can be used not only on integers, but also on floats, and between integers and floats, and …). So I found Eiffel unreasonably clumsy, and for no good reason (paraphrase: "There can exist situations where the meanings of different operations cannot be disambiguated by the types of their parameters, e.g., point(x, y) and point(r, theta), so we must forbid this choice."), and so in the language design operations are totally specified by the name of the operation. (Perhaps C++ mangles names to do this, but at least the programmer doesn't need to keep track of THAT.)

This is a great pity. Most of the code that I write really is statically determinable, so it would be nice to be able to compile it to execute with the potential efficiency. Unfortunately, it usually needs to work with at least some modules that aren't. Currently the only choice seems to be C (and probably a few C mimics … I wonder how I would link Ruby with Ada95 [GNAT]?).

···


– Charles Hixson
Gnu software that is free,
The best is yet to be.

> Are you trying to say that if I do:
>
> gcc -x objective-c -o test test.m -lobjc
>
> the resulting executable is *NOT* native code? All I've been trying to
> say is that a native code compiler does exist for Objective-C, and
> dynamic typing has not stopped NeXT/GCC from making it.

What no one seems to have mentioned about Obj-C is this: yes, it is compiled
with gcc, but it will not run without the runtime package. The runtime
package is required to handle the dynamism in Obj-C that is not in pure C.

···

On Thursday 08 August 2002 09:42 pm, Rafael ‘Dido’ Sevilla wrote:

“Tord Kallqvist Romstad” romstad@math.uio.no wrote in message
news:gqk1y99en66.fsf@nereids.uio.no…

llothar@web.de (Lothar Scholz) writes:

Even with dynamic typing, the compiled code would probably perform
much better than interpreted Ruby. Add optional type declarations
(like in Lisp), and it should be possible to achieve the same speed as
with optimized Lisp code.

Do you disagree? If so, what properties of Ruby make it harder to
compile than Lisp?

Ruby is a mutating monster, so you have to look up methods in a hash.
You can optimize this by building a virtual table that gets invalidated when
a class is modified, but it can be tricky to know when this is worthwhile.

To support efficient compilation, a modified version would be needed that
optionally allows type annotations and disallows modifying the type of
an object that has been created (either prevent class modification, or make
a modified class a new type internally).
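The "mutating monster" point can be seen in a few lines: a class may be reopened at any moment, so a compiler cannot freeze method lookup into a fixed table without a scheme for invalidating it when the class changes. Greeter here is an invented example class.

```ruby
# A method can be redefined at runtime, and an existing call site must see
# the new definition -- which is why method lookup cannot simply be
# compiled down to a static table.

class Greeter
  def hello
    "hi"
  end
end

g = Greeter.new
first = g.hello   # dynamic lookup finds the original body: "hi"

# Reopen the class mid-program; the same receiver now resolves differently.
class Greeter
  def hello
    "hello there"
  end
end

second = g.hello  # "hello there" -- same object, new method body

puts [first, second].inspect
```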

Never written a compiler, right?

Of course I have.

Obviously, who hasn't ;-)

Mikkel

Charles:

[In RE: multiparadigm programming, some thoughts relevant to your remarks
follow…]

%% Charles Hixson wrote:

%% The problem that I have with many of those paradigms, is that
%% they don’t easily support the idea of data files. Particularly
%% of data files that may be looked at and changed in some other
%% language. This seems endemic in the functional languages,
%% though, of course, the object oriented languages have their
%% problems with it also (as you know if you have ever tried to
%% keep a relational database in sync with a object oriented
%% program). This is why marshalling was invented, and it’s far
%% from perfect. Only the procedural languages seem to have solved
%% this one, and that largely by not directly representing the kind
%% of structures that cause problems.

Well, at the binary level most of the data types in all languages are
reducible to bytes, or some compound thereof, so I'm not exactly sure
what you mean. Oz, for instance, treats all serialized file system objects as
standard "pickles," and under the hood, being developed as it is in
templatized ANSI C++, it ultimately uses data structures that are quite
compatible with C/C++ and any language that has analogous data structures.
Not sure I understand your point here…

%% I assume that there actually are ways to address this in Mozart,

Yes, the Pickle functor. There are also some contributed functors that handle
file system abstractions rather painlessly based on Pickle. Pickled "files"
are URL-based in Oz, thus easily accessed over standard internet protocols.
And since Oz abstracts network protocols to a very high level, one
needn't worry about the specific file system on which a Pickle resides -
that's a low-level detail dealt with by the Oz emulation VM, in much the
same way Java deals with file streams at an I/O level. Except the network
abstractions aren't as transparent in Java as they are in Oz.

%% but I wasn’t able to discover them in the time that I allotted
%% for evaluation. Other than that it looked quite interesting,
%% but that was such a large “but” that I never tried to implement
%% anything. Ditto for Haskell. Eiffel was quite interesting, and
%% I rather like it. But some unnecessary choices (e.g., no
%% operation redefinitions with altered parameter lists) have
%% rendered it quite inflexible. I’ve used it a bit, but you need
%% to be extremely careful to not name operations anything
%% reasonable, e.g. substring, because that will cause grief.
%% Instead use a name like substring_with_unspec_length. Operator
%% overriding is quite basic to all other computer languages. Even
%% Fortran 77 has what Eiffel considers a forbidden amount of
%% operator overriding (+ can be used not only on integers, but
%% also on floats, and between integers and floats, and …). So I
%% found Eiffel unreasonably clumsy, and for no good reason
%% (paraphrase" There can exist situations where the meanings of
%% different operations cannot be disabiguated by the type of their
%% parameters, e.g., point(x, y) and point(r, theta), so we must
%% forbid this choice."), and so in the language design the
%% operations are totally specified by the name of the operation.
%% (Perhaps C++ munges names to do this, but at least the
%% programmer doesn’t need to keep track of THAT.)

Well, every language has its rationale, and the semantics of each language
is usually and quite rightly influenced heavily by the problem domain or
domains that its author had in mind originally when he/she invented the
language.

At the extreme end are languages like Eiffel and Ada, which have an explicit
reason for just about every feature and non-feature imaginable. (I would
hypothesize that if you like Eiffel with its “Design by Contract” paradigm
enforced by the compiler, you probably dig (or have love-hate relationship
with) Ada too, with its brutal compile-time type checking.)

%%
%% This is a great pity. Most of the code that I do really is
%% statically determinable, so it would be nice to be able to
%% compile it to execute with the potential efficiency.

Efficiency gains of compiled vs. interpreted code are often overstated,
except in cases of heavy use of recursion, as for instance in mathematically
intense operations, e.g., 3D graphics. Even there, with a thin
extension interface to a native graphics engine, it's rather hard to tell
whether C/C++ or Ruby, Java, etc. is at the wheel.

That having been said, one side project I’m working on to make it easier to
experiment with different object-code generators based on Ruby syntax is a
generic Ruby parser DLL, based on the parse.y file in the standard Ruby
distro, but modified to expose the hooks into the parser in a more generic,
back-end-neutral fashion through a DLL. Once operational, it should
theoretically be possible to write any number of backends for the parser: a
Java bytecode generator, a C code generator, a native assembly code generator,
.NET CLR generator, you name it. Any language that can interface to stdcall
C functions in a DLL or SO should be able, moreover, to make use of it.

%% Unfortunately, it usually needs to work with at least some
%% modules that aren’t. Currently the only choice seems to be C
%% (and probably a few C mimics … I wonder how I would link Ruby
%% with Ada95[gnat]?).

Aha! I knew you liked Ada!

Well, GNAT specifically is a variant of the GCC compiler, and as such it
boasts (and delivers) complete binary interoperability with GCC C/C++. I
would guess a binary extension between Ruby and Ada would be fairly
straightforward, assuming it was really something somebody wanted badly
enough to invest some time in developing.

I’m currently working on such a binary interoperability package between Ruby
and Java, which I’m calling Arubica, using the Ruby C API and the Java
native interface. With Arubica, you can create an instance of the Ruby
interpreter and call into the Ruby ObjectSpace to call existing Ruby modules
and define new classes and modules on the fly, in Java. And vice versa: you
can create an instance of the JVM and call into a custom class loader and
create pure Java objects on the fly.

This is a very different approach than that of JRuby/Jython. The original
motivation is to enable testing of pure Java objects in pure Ruby, but the
bidirectional interoperability capability seemed worth it, while I was at
it. The design is nearly complete, but lots of coding remains, and I’m sure
more coding will result in redesign, as is only proper.

It’s currently in early alpha, sometime in the next couple months I’ll
release a version through RAA.

My point is: where there's a will, there's a way. :-)

Sincerely,

Bob Calco

%% –
%% – Charles Hixson
%% Gnu software that is free,
%% The best is yet to be.

Mikkel:

Mozart isn’t abandoned, there’s just a really, really small, tightly-knit
community, most of them uber-engineer types, centered around the language.

There’s a very good book in the works on computer programming using Oz,
written by two of Oz’s contributing developers, the draft of which is
currently available in multiple formats at:

http://www.info.ucl.ac.be/people/PVR/book.html

As for the installation being "huge," I don't know what to say. It's bigger
than Ruby but considerably smaller than Java. Big is an eye-of-the-beholder
kind of thing.

You need to have Emacs, but only if you want to use their programming
interface. It comes with TONS of documentation in HTML, PDF, etc. All of
that stripped out, it’s probably about 4-6MB, including core libraries and
the VM, contained in the 2.6MB emulator.dll.

Anyway, Ruby is indeed more compact overall, on top of having a nicer
syntax. But compared to Java from an apples-to-apples perspective, I like
Oz.

Sincerely,

Bob Calco

%% -----Original Message-----
%% From: MikkelFJ [mailto:mikkelfj-anti-spam@bigfoot.com]
%% Sent: Friday, August 09, 2002 5:44 AM
%% To: ruby-talk ML
%% Subject: Re: Compiling Ruby to Native Code?
%%
%%
%%
%% “Bob Calco” robert.calco@verizon.net wrote in message
%% news:NCEJJNLDMEJLEJHKNGNHEEFADFAA.robert.calco@verizon.net
%%
%% > If it interests you at all, check them out at www.mozart-oz.org.
%%
%% I did look briefly at mozart. I don’t recall the details but I think it
%% either seemed abandoned (or was that a related project to oz?) or it
%% required a huge installation to run, preventing efficient
%% deployment - that
%% would put it in the same ballpark as Erlang.
%%
%% Mikkel
%%
%%
%%
%%

Well, who wants to use a language without its standard runtime package? I
don't see many C-based projects not relying heavily on the "/lib/libc.so"
runtime. Likewise, most C++ programmers rely on the "/usr/lib/libstdc++.so"
runtime. Those are languages in which the runtime is not mandatory. Now,
assuming you're not programming a very small PIC, what difference does it
make to be forced to use the runtime or not? You would use it anyway.

···

On Fri, 9 Aug 2002, Albert Wagner wrote:

On Thursday 08 August 2002 09:42 pm, Rafael ‘Dido’ Sevilla wrote:
What no one seems to have mentioned about Obj-C is that: Yes, it is
compiled with gcc but will not run without the runtime package. The
run time package is required to handle the dynamism in Obj-C that is
not in pure C.


Mathieu Bouchard http://artengine.ca/matju

Charles:

[In RE: multiparadigm programming, some thoughts relevant to your remarks
follow…]

%% Charles Hixson wrote:

%% The problem that I have with many of those paradigms, is that
%% they don’t easily support the idea of data files. Particularly
%% of data files that may be looked at and changed in some other
%% language. …
Well, at the binary level most of the data types in all languages are
reducible to bytes, or some compound thereof, so I'm not exactly sure
what you mean. Oz, for instance, treats all serialized file system objects as
standard "pickles," and under the hood, being developed as it is in
templatized ANSI C++, it ultimately uses data structures that are quite
compatible with C/C++ and any language that has analogous data structures.
Not sure I understand your point here…

I have some things I want to do (quite a large number, actually) that can be
completely specified at compile time. Others where I at least know the type
and length of the parameters. And others where things are pretty open, where
I can be reading a piece of data, and be expecting the data to tell me itself
whether it’s a number, or a word definition, or a picture, or… When it
gets in, I want it to be living in an object of the appropriate type. So far
the languages that I can see how to do this in are C, Python, and Ruby. And
in C it’s too clumsy for words. But I was taught that when you can specify
things at compile time, then you should do so, because it facilitates both
efficiency and error catching (and correction).
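A hedged sketch of the self-describing-data idea above, with invented tag names: each record carries a tag, and the reader dispatches on it so that the value ends up "living in an object of the appropriate type."

```ruby
# Each record is a [tag, payload] pair; the tag tells the reader what kind
# of object to build. The tags (:number, :word, :pair) are made up purely
# for illustration.

def materialize(record)
  tag, payload = record
  case tag
  when :number then Float(payload)                 # build a number
  when :word   then String(payload)                # build a string
  when :pair   then payload.map { |p| Float(p) }   # build a numeric pair
  else
    raise ArgumentError, "unknown tag #{tag.inspect}"
  end
end

records = [[:number, "3.5"], [:word, "cat"], [:pair, ["1", "2"]]]
values  = records.map { |r| materialize(r) }

puts values.inspect   # [3.5, "cat", [1.0, 2.0]]
```

In C this kind of dispatch needs tagged unions and manual bookkeeping; in a dynamically typed language each value simply arrives as an object of the right class.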

%% I assume that there actually are ways to address this in Mozart,

Yes, the Pickle functor. There are also some contributed functors that
handle file system abstractions rather painlessly based on Pickle. Pickled
“Files” are URL based in Oz thus easily accessed over standard internet
protocols as URLs. And since Oz abstracts network protocols to a very high
level, one needn’t worry about the specific file system on which a Pickle
resides - that’s a low level detail dealt with by the Oz emulation VM, in
much the same way Java deals with file streams at an I/O level. Except the
network abstractions aren’t as transparent in Java as they are in Oz.

It's been a while since I looked at Oz, perhaps a year, so the details are
fuzzy. I know that I ran into the Pickle functor, but I also know that I
couldn't quickly figure out how to use it in the way that I wanted. (Perhaps
it has trouble with random I/O? Part of what I want to do is build an ISAM
file with accompanying indices.)

Well, every language has its rationale, and the semantics of each language
is usually and quite rightly influenced heavily by the problem domain or
domains that its author had in mind originally when he/she invented the
language.

That’s certainly the case (though I would argue that some of them, e.g.,
Basic, are poorly designed even for their intended areas of use).

At the extreme end are languages like Eiffel and Ada, which have an
explicit reason for just about every feature and non-feature imaginable. (I
would hypothesize that if you like Eiffel with its “Design by Contract”
paradigm enforced by the compiler, you probably dig (or have a love-hate
relationship with) Ada too, with its brutal compile-time type checking.)

Right you are. I really like Ada, but it’s basically impossible to do
anything flexible with it.


Efficiency gains of compiled vs. interpreted code are often overstated,

I had thought that, and then I encountered Java and Visual Basic. That
renewed my belief.

That having been said, one side project I’m working on to make it easier to
experiment with different object-code generators based on Ruby syntax is a
generic Ruby parser DLL, based on the parse.y file in the standard Ruby
distro, but modified to expose the hooks into the parser in a more generic,
back-end-neutral fashion through a DLL. Once operational, it should
theoretically be possible to write any number of backends for the parser: a
Java bytecode generator, a C code generator, a native assembly code
generator, a .NET CLR generator, you name it. Any language that can interface
to stdcall C functions in a DLL or SO should be able, moreover, to make use
of it.
Sort of like the inverse of “Languages for the JavaVM”?
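The "one parser, many backends" idea can be sketched in Ruby itself. Everything below (Node, the emitter classes) is an invented illustration of the shape of such an architecture, not part of the actual parser DLL being described:

```ruby
# One front-end AST, several interchangeable code emitters.
# Node, CEmitter and LispEmitter are hypothetical names.
Node = Struct.new(:op, :left, :right)

# Emits C-style infix expressions.
class CEmitter
  def emit(n)
    n.is_a?(Node) ? "(#{emit(n.left)} #{n.op} #{emit(n.right)})" : n.to_s
  end
end

# Emits Lisp-style prefix expressions from the very same tree.
class LispEmitter
  def emit(n)
    n.is_a?(Node) ? "(#{n.op} #{emit(n.left)} #{emit(n.right)})" : n.to_s
  end
end

ast = Node.new(:+, 1, Node.new(:*, 2, 3))
puts CEmitter.new.emit(ast)    # => (1 + (2 * 3))
puts LispEmitter.new.emit(ast) # => (+ 1 (* 2 3))
```

The point is that adding a new target means writing one more emitter class; the parser and AST stay untouched.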

%% Unfortunately, it usually needs to work with at least some
%% modules that aren’t. Currently the only choice seems to be C
%% (and probably a few C mimics … I wonder how I would link Ruby
%% with Ada95[gnat]?).

Aha! I knew you liked Ada!

You were totally right. I not only love it, I also hate it. But at least it
doesn’t have those $#%#^%& pointer casts that C and C++ have.

Well, GNAT specifically is a variant of the GCC compiler, and as such it
boasts (and delivers) complete binary interoperability with GCC C/C++. I
would guess a binary extension between Ruby and Ada would be fairly
straightforward, assuming it was really something somebody wanted badly
enough to invest some time in developing.

That’s why I was hoping that it might be reasonably simple. Perhaps one could
just pretend that the called module was a C module. Or at worst have a
simple C module that just acted as a delegator. But I wouldn’t be the one
who wrote such a thing. Still, I understand that soon after gcc 3.0 is
released, gnat is expected to be rolled into the compiler collection.

I’m currently working on such a binary interoperability package between
Ruby and Java, which I’m calling Arubica, using the Ruby C API and the Java
native interface. With Arubica, you can create an instance of the Ruby
interpreter and call into the Ruby ObjectSpace to call existing Ruby
modules and define new classes and modules on the fly, in Java. And vice
versa: you can create an instance of the JVM and call into a custom class
loader and create pure Java objects on the fly.

That sounds interesting, but I’m having trouble visualizing what is going on.
Still, it might make a lot of Java GUI builder output available at not too
much additional cost.


My point is: Where there’s a will, there’s a way. :slight_smile:

The problem is, which is the best one, since there are usually dozens.

···

On Monday 12 August 2002 18:58, you wrote:

Sincerely,
Bob Calco

I can link statically against libc and libstdc++. I would be surprised
if it is not possible to do the same with objc.

Paul

···

On Sat, Aug 10, 2002 at 12:45:09AM +0900, Mathieu Bouchard wrote:

Well, who wants to use a language without its standard runtime package ? I
don’t see many C-based projects not relying heavily on the “/lib/libc.so”
runtime. Likewise, most C++ programmers rely on the “/usr/lib/libc++.so”
runtime. Those are languages in which the runtime is not mandatory. Now,
assuming you’re not programming a very small PIC, what difference does it
make to be forced to use the runtime or not? You would anyway.

“Charles Hixson” charleshixsn@earthlink.net wrote in message
news:200208171345.24287.charleshixsn@earthlink.net

I have some things I want to do (quite a large number, actually) that can
be completely specified at compile time. Others where I at least know the
type and length of the parameters. And others where things are pretty open,
where

The main difficulty is that Ruby’s classes are not fixed at compile time.
That is much harder to deal with efficiently than the lack of static types.
You could do some type inference and implement an efficient solution with a
fallback to generic types, but method lookup in a dynamic world is trickier.
It can be done, but you eventually accumulate so much overhead that bytecode
is almost as efficient.
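A small example of why static dispatch is hard: because classes stay open, the method a call site resolves to can change after any notional "compile time":

```ruby
# Classes in Ruby are open: the target of a method call can be
# replaced at runtime, even for objects that already exist.
class Greeter
  def hello; "hi"; end
end

g = Greeter.new
puts g.hello         # => hi

# Later -- possibly in another file loaded at runtime -- the class
# is reopened and the method redefined:
class Greeter
  def hello; "bonjour"; end
end

puts g.hello         # => bonjour  (same object, new behavior)
```

A compiler that had inlined the first definition of `hello` would now be wrong, which is why every call must either go through a dynamic lookup or carry a guard.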

You need some kind of Class.freeze method to indicate that you do not intend
to change the class anymore, so that it can be compiled efficiently.
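As it happens, freezing a class object in present-day Ruby does roughly this: once frozen, attempts to add or redefine methods raise an error, which is exactly the guarantee a compiler would need before fixing dispatch. A minimal sketch (behavior as in modern Ruby; in this thread's context it was still a proposal):

```ruby
# Freezing a class object forbids further modification, so its
# method table could in principle be compiled down statically.
class Point
  def x; 1; end
end

Point.freeze  # promise: this class will not change any more

begin
  class Point
    def y; 2; end  # attempt to modify the frozen class
  end
rescue => e
  puts "rejected: #{e.message}"
end
```

What current Ruby does not do, of course, is use that freeze for any compilation; it only enforces the immutability at runtime.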

Another, more problematic issue is that C extension modules have full access
to the interpreter, so you can’t do any safe compile-time analysis. You could
make assumptions and throw exceptions on “badly” behaved extensions.

A safer solution is an alternative C interface that only calls simple
functions and defines the classes on the Ruby side; something like
`def external(“my C function”) (x, y)` is also a possibility (callbacks
would also be needed, but no reflection).
This is the kind of interface that OCaml has, and it does allow external C
functions to become class members.
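One way to picture such an interface is a declaration table kept on the Ruby side, which a compiler could use to emit direct calls instead of reflective ones. Everything below (`external`, the C symbol names, the class) is hypothetical, sketching the shape of the idea rather than any real API:

```ruby
# Hypothetical registry of external C functions declared from Ruby:
# each declaration records the Ruby name, the C symbol, and the arity,
# which is all a compiler would need to generate a direct call.
class ExternalDecls
  Decl = Struct.new(:ruby_name, :c_symbol, :arity)

  def self.table
    @table ||= []
  end

  # Declare an external C function (imagined API, for illustration).
  def self.external(ruby_name, c_symbol, arity)
    table << Decl.new(ruby_name, c_symbol, arity)
  end
end

ExternalDecls.external(:sqrt, "c_sqrt", 1)
ExternalDecls.external(:pow,  "c_pow",  2)

ExternalDecls.table.each do |d|
  puts "#{d.ruby_name} -> #{d.c_symbol}/#{d.arity}"
end
```

Because the declarations carry no reflection, the analysis stays safe: the compiler knows every way C code can be reached.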

Mikkel