Ideas on "Why Living Dangerous can be A Good Thing" in Ruby?

I agree with this, and I'm actually not one who has any concerns
about Ruby's security or lack thereof. I am a major proponent of the
openness.

However, it's not random modification but the modification of library
A by library B (or the modification of something in the standard
library by library C, which could affect A, B, and C) that could be a
real concern.

But this is why RubyGems allows you to run the unit tests for a
library BEFORE you install it. If you've inspected those unit tests
(and optionally added the potential edge cases you expect to
encounter), this would solve a lot of issues.
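
For example (from memory, so the exact switch may differ; check 'gem
help install'):

   gem install some_library --test   # run the library's test suite at install time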

So that's my suggestion as one potential 'solution' to the proposed 'problem'.
However, the issue is real and deserves to be addressed. So is it the
consensus of the community that our tests are the core of our safety
net? Or do we have, or need, extra precautions?

Would it be nice to have a sort of direct access to the unit tests of
the standard library and core, as well as the tests of all installed
libraries, so that these could be run at install time, at run time,
or whenever you need them? Would it be nice to know that library A
broke something in B that library C needed? Or is the practical
occurrence of this small enough, thanks to responsible coding and a
strong tendency towards comprehensive testing, that we needn't worry
about it?

It's this I'm not fully sure of, neither in my own convictions nor in
what the community tends to believe. Getting this out in the open
will be a good thing.

I'm a firm believer in Ruby's open design, a strong proponent of
dynamic code and strongly tested but agile software, and a fan of
pretty much all things that are common in Ruby. I'd love to have a
definite answer when the issue of security is brought up, though.

···

On 1/8/06, gwtmp01@mac.com <gwtmp01@mac.com> wrote:

It isn't like the code is *randomly* modifying code (like certain
languages that allow pointers to any place in memory!). The
modifications are explicit and purposeful, and so, I think, they can
be tested as well as any other code.

Honestly, I don't see this issue either. It's just a common point the
static people tend to make. Though it's understandable, given that
the behavior of your software can change quite drastically once it's
out of your hands, more so in Ruby than in some other languages, this
boils down to good practice and responsibility. If you're opening up
a class and adding or removing some code, it's your responsibility to
ensure it doesn't break things.
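
For instance (a throwaway sketch with made-up names): if you open up
String, ship a test that pins down both the new behavior and the old
behavior you rely on:

   require 'test/unit'

   class String            # the patch in question
     def shout
       upcase + '!'
     end
   end

   class TestStringPatch < Test::Unit::TestCase
     def test_shout_is_added
       assert_equal 'HI!', 'hi'.shout
     end

     def test_existing_behavior_survives
       assert_equal 'HI', 'hi'.upcase   # prove we didn't clobber anything nearby
     end
   end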

If you're a library maintainer (like many of us are), it's your job to
identify and test your edge cases. I don't see this being any
different from anything else.

···

On 1/8/06, gwtmp01@mac.com <gwtmp01@mac.com> wrote:

> and the ability to make a set of software behave
> in erratic ways by modifying its internals via metaprogramming and
> the like.
>

I don't buy this in the sense that I don't see how this could be
a concern for a dynamic language and not for a static language.
You are still writing code that has to be tested, whether it is
hard-to-understand meta-programming or hard-to-understand data
structures that simulate meta-programming. It is still an issue
of software correctness, and I don't see how static vs. dynamic
changes that issue in any significant way.

Hi Edward,

I like being protected from myself. I'm an excellent analyst, designer, and
yes, coder. But I'm somewhat careless. I make mistakes. That's why I'm glad I
have a cover on my circular saw. That's why I'm glad my electric stove has
lights saying which burners are still hot. And that's why I enjoy programming
in languages like Ruby, Python and Java, that protect me from myself.

I spent 10 years coding in C, and I was forever tracking down
intermittents that usually turned out to be one uninitialized
variable, or a picket fence condition where I went off the end of the
allocated array by one, or a variable I forgot to free or freed
twice, that kind of thing.

I know a lot of people who never make mistakes. They can code C all day long
and never get a segfault. They needn't be protected from themselves. But not
all excellent programmers are like them.

SteveT

Steve Litt

slitt@troubleshooters.com

···

On Sunday 08 January 2006 12:32 pm, James Edward Gray II wrote:

On Jan 8, 2006, at 12:14 AM, Gregory Brown wrote:
> Actually, this is not the issue at hand. This really *does* boil down
> to language design in this case. With Ruby's openness and
> meta-programming, even well tested programs can be modified and
> redefined dynamically.
>
> This, of course, has many benefits, but the bottom line is that Java
> was built with a security model to prevent things like this, while
> Ruby was built to be open from the ground up to facilitate this.

Sentences like the above always read to me as: "Java was designed to
protect the programmer from doing programmer things." Always sounds
funny to me.

Eivind Eklund wrote:

> I mean, my general advice when it comes to Ruby, when asked about
> security, is that I basically respond, "There is none, but it's not as
> bad as you'd expect. Write proper test suites, code responsibly, and
> make sure you nail down those edge cases. Continuous integration is a
> must, and idiomatic code with proper style will help make the API
> less likely to cause damage (such as the use of ! and other
> indicators)."

There's documentation for how to do good APIs here:
http://rpa-base.rubyforge.org/wiki/wiki.cgi?GoodAPIDesign

I hope that's useful; feel free to edit it, change things, or add discussion.

> However, to the outsider, this is only an explanation of "how" to
> overcome the apparent "flaw". I'd like to do as good a job as I can
> of explaining why it isn't a flaw, when practiced correctly.

Let's look at it as a cost/benefit analysis. The cost of declaring
variables and types ends up as roughly half the code size. That's
twice the amount to write and, more importantly, twice the amount to
read, twice the number of places to change when refactoring, etc. It
also means that there are a lot of things we can't do, because we are
"protected" from them.

At this cost, the type and variable declarations had better give us a
lot. In practice, I find that they give me very little, bug-wise:
maybe 5% of my simplest bugs are detected by them. The advantages I
get are in the speed of the compiled code, and as documentation.
However, these benefits are too small to be worthwhile for the size
of projects I presently do (one- and two-person projects).

Does a language with type-inference require as many type and variable
declarations as a language without type-inference?

Is the C++ type system the same as the SML type system?

Will programmers who passively suffer compiler type-checking detect as
many bugs as programmers who actively use 'type-full' programming as a
checking-tool?

···

On 1/8/06, Gregory Brown <gregory.t.brown@gmail.com> wrote:

I've implemented a system for doing run-time checks of type
declarations; it's available from RPA (as types) and from
Index of /~eivind/ruby/types/. This allows very flexible type checks
for Ruby programs, adding whatever amount of type discipline you
want. In practice, I found this to just get in the way: it detected
very few bugs, and added more stuff to change when I did
refactoring.
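
The flavor was roughly this (a made-up sketch, not the actual API of
the types package):

   # Made-up helper in the spirit of declared run-time type checks;
   # not the real interface.
   def expect_type(klass, value)
     unless value.kind_of?(klass)
       raise TypeError, "expected #{klass}, got #{value.class}"
     end
   end

   def schedule(job, delay)
     expect_type(String, job)      # the 'declarations'...
     expect_type(Numeric, delay)   # ...each one more thing to edit when refactoring
     # ...
   end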

Eivind.

This is what I'm trying to do:

I am not trying to prove that Ruby's openness is right for everything
and everyone, but rather to show that it is right and does work for
*Ruby*.

This is something that a lot of people are critical of, and I'd like
to try to explain it without too much bias.

···

On 1/9/06, Phrogz <gavin@refinery.com> wrote:

Instead, convince them that use cases are more secure than the false
sense of security syntax- and static-checking provide. Acknowledge that
there ARE some downsides to the openness, but that they are outweighed
by the freedom provided.

This thread is not about a language war, but touches on issues at the
fringes of one. Remember that no one language is the Right language for
all cases. There may be cases where the openness of Ruby makes it the
wrong choice.

One of the things I really liked about Bertrand Meyer's
_Object-Oriented Software Construction_ was his 'programming by
contract' point of view. I've found that concept to be useful in
almost any program I've written after reading that book, regardless
of the language I've used.

One of the most useful ideas is the notion that if client code
refuses to adhere to the pre-conditions then all bets are off and the
library code has no responsibility to ensure correct behavior.
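
In Ruby the contract tends to end up as documentation plus, at most,
a guard at the boundary. A sketch:

   class Account
     def initialize(balance)
       @balance = balance
     end

     # Pre-condition: amount must be positive and no larger than the
     # balance. If the caller violates that, all bets are off.
     def withdraw(amount)
       raise ArgumentError, 'bad amount' unless amount > 0 && amount <= @balance
       @balance -= amount
     end
   end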

To a certain extent I think the concerns that static people have about
Ruby seem to be concerns about what happens if they violate the contract
by not ensuring the pre-conditions *at run-time*. The response should be:
Why is that my problem? The water is under the bridge at that point. Game over.

More important is answering:
  How can I discover the pre-conditions associated with Ruby code?
  How can I avoid writing code that breaks the pre-conditions?
  How can I determine the post-conditions?

So if there is valid criticism of the Ruby approach, I think it could
be said that oftentimes the pre-conditions (and post-conditions) are
not documented very well (especially with regard to meta-programming).

Documentation can help, but I think there is room for language
features in this area too (e.g. structured annotations). Namespaces
will also help to localize effects and provide a framework for
describing those effects.

Quick Question: What core classes are modified by ActiveRecord or Og?
Quick Answer: Lots, but good luck finding a concise description.

Gary Wright

···

On Jan 8, 2006, at 2:40 AM, Gregory Brown wrote:

Though it's understandable, given that the behavior of your software
can change quite drastically once it's out of your hands, more so in
Ruby than in some other languages, this boils down to good practice
and responsibility. If you're opening up a class and adding or
removing some code, it's your responsibility to ensure it doesn't
break things.

It's a blessing and a curse, like most things. Infinite power, infinite responsibility, and all that rot.

It came up here recently, in fact. Until Ruby 1.8.3, Logger didn't allow users to specify the output format. Because of that, some people hacked into the class and replaced the proper guts to make it work. It was nice to be able to do this, but of course the Logger API was updated and some projects, including Rails, broke. Rails has since been patched to work around this, of course.
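
From memory (so the details are probably off), the hacks were in this
spirit: reopen the class and swap out a private method:

   require 'logger'

   # Forcing a custom line format by replacing one of Logger's
   # *private* methods. When 1.8.3 reshuffled format_message's
   # arguments, patches like this broke.
   class Logger
     private

     def format_message(severity, timestamp, msg, progname)
       "#{timestamp} #{severity}: #{msg}\n"
     end
   end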

Here's a thought though... Let's forget for a moment *how* it happened and just look at *what* happened. Short story: An upgrade introduced an incompatibility bug with some dependent software. That happens with tons of projects in every language and is only even of note here because of *how* the incompatibility was introduced.

James Edward Gray II

···

On Jan 8, 2006, at 1:37 AM, Gregory Brown wrote:

However, it's not random modification but the modification of library
A by library B (or the modification of something in the standard
library by library C, which could affect A, B, and C) that could be a
real concern.

Sentences like the above always read to me as: "Java was designed to
protect the programmer from doing programmer things." Always sounds
funny to me.

Hi Edward,

James actually. ;-)

And that's why I enjoy programming in languages like Ruby, Python and Java, that protect me from myself.

Wow, that's a mighty unusual alliance of languages. I wonder how most Java programmers would feel about being lumped in with Python and Ruby on safety...

James Edward Gray II

···

On Jan 8, 2006, at 3:42 PM, Steve Litt wrote:

On Sunday 08 January 2006 12:32 pm, James Edward Gray II wrote:

Java: Yes. (in the average scenario)
Python: Pretty much kinda maybe.
Ruby: You'll shoot your eye out! ;-)

···

On 1/8/06, Steve Litt <slitt@earthlink.net> wrote:

That's why I'm glad my electric stove has
lights saying which burners are still hot. And that's why I enjoy programming
in languages like Ruby, Python and Java, that protect me from myself.

Eivind Eklund wrote:
> At this cost, the type and variable declarations had better give us a
> lot. In practice, I find that they give me very little, bug-wise:
> maybe 5% of my simplest bugs are detected by them. The advantages I
> get are in the speed of the compiled code, and as documentation.
> However, these benefits are too small to be worthwhile for the size
> of projects I presently do (one- and two-person projects).

Does a language with type-inference require as many type and variable
declarations as a language without type-inference?

Does a reference to declaring types and variables make it clear to you
that we're talking about a language where you have to declare types
and variables, and *not* one based on type inference?

Is the C++ type system the same as the SML type system?

Does the SML type system include declaring types and variables to the
degree of taking up roughly half the code?

Will programmers who passively suffer compiler type-checking detect as
many bugs as programmers who actively use 'type-full' programming as a
checking-tool?

Will people who use the annoying technique of asking bad rhetorical
questions get on your nerves too?

;-)

Really, and in all friendliness, the question isn't whether they'll
detect as many bugs. The question is whether those who try to use
types as a checking tool find so many bugs and/or get so many other
benefits that the costs introduced by the types are worth it. The
answer will vary by language, by environment, by programmer, and by
methodology used. I believe the static type system of SML or Haskell
may well be worth it; I've just not worked enough with them to know.

For me, in the environment I work in, with the ways I work with
Ruby, I've found that very few of my bugs would be caught by extra
type checking. I just don't end up with wrongly typed data much at
all, and in the few cases where I do, the errors have come up
immediately. Since I wrote my type checking library, I've not had a
subtle bug that would have been caught by more type checking. Before
I wrote the library, I thought that adding more type checking would
catch a significant number of bugs. When I added it and found that it
got in the way, I started looking for cases where it would have
helped. I found very few, and they've so far shown up immediately (as
missing methods).

Eivind.

···

On 1/9/06, Isaac Gouy <igouy@yahoo.com> wrote:

> > This, of course, has many benefits, but the bottom line is that
> > Java was built with a security model to prevent things like
> > this, while Ruby was built to be open from the ground up to
> > facilitate this.
>
> Prevent what? One can build twisty, loopy, self-modifying code in
> Java, too. It's just painful; maybe that's part of The Plan.

Though that's funny, I really do think it was part of the plan for
Java. They made no attempt to make doing it convenient or useful
(though that can be said for a lot of Java things), which is part of
the way they discourage developers from being 'wild and crazy'.

...which is all part of the appealing-to-management[0] concept of
programmers as generic-and-replaceable cogs in the project
development wheel.

If programmers have too much (i.e. any) ability to be "wild and
crazy" (== creative), that could be considered dangerous to project
"integrity".

Of course, the sad thing is that there's some truth in that. And
much like knowledge, a little truth (especially when taken out of
context) is a dangerous thing. :-)

> There is no inherent security from code that is too clever for its
> own good.

That's true. We are really addressing the illusion of security. Or
at least a superficial level of security. I think a lot of people
are just scared by how damn convenient and common such practices are
in Ruby, even if their language is capable of doing similar things.

I think it's part of the old no-such-thing-as-a-free-lunch notion.
Some people (and they're not necessarily stupid people) have trouble
with a concept like "Ruby is just a better and more powerful language
than Java". They presume that there _has_ to be a cost for that extra
power.

And the idea that you pay for extra power by losing "safety" (whatever
"safety" means in this context) is a seductive one, because it has so
many physical-world parallels. Though it's dreadfully simplistic at
best (and just plain wrong at worst).

So... just about any management will read "increasing power" as
"losing safety" which translates to "increasing _risk_" and
INCREASING RISK IS BAD so no Ruby/Python/Smalltalk/Lisp for you,
heathen. Get back to being an indistinguishable cog in the low-risk,
industry-best-practice[1] Java machine!

Ahem. Not that I'm venting or anything. :-)

Pete.

[0] I know, I know. Not _all_ management. But definitely some.

[1] Where industry-best-practice => what-everyone-else-is-doing
    => if-everyone-else-is-doing-it-I-can't-be-blamed-if-it-fails,
    because-I-didn't-choose,-the-_industry_-did. :-)

···


This is an excellent point, in my opinion. If I write code like:

   not_my_object.instance_eval { @secret_value += 1 }

I know I'm crossing the line. When it eventually breaks, I'll know whose fault it is. For now, though, I'm telling Ruby, "Just trust me; I know what I'm doing here." The great part is that she believes me and does it. ;-)

James Edward Gray II

···

On Jan 8, 2006, at 9:00 AM, gwtmp01@mac.com wrote:

One of the things I really liked about Bertrand Meyer's
_Object-Oriented Software Construction_ was his 'programming by
contract' point of view. I've found that concept to be useful in
almost any program I've written after reading that book, regardless
of the language I've used.

One of the most useful ideas is the notion that if client code
refuses to adhere to the pre-conditions then all bets are off and the
library code has no responsibility to ensure correct behavior.

Hi James,

Compare them to C :-). Or Perl.

Steve Litt

slitt@troubleshooters.com

···

On Sunday 08 January 2006 04:52 pm, James Edward Gray II wrote:

On Jan 8, 2006, at 3:42 PM, Steve Litt wrote:
> On Sunday 08 January 2006 12:32 pm, James Edward Gray II wrote:
>> Sentences like the above always read to me as: "Java was designed to
>> protect the programmer from doing programmer things." Always sounds
>> funny to me.
>
> Hi Edward,

James actually. ;-)

> And that's why I enjoy programming in languages like Ruby, Python
> and Java, that protect me from myself.

Wow, that's a mighty unusual alliance of languages. I wonder how
most Java programmers would feel about being lumped in with Python
and Ruby on safety...

Surprised, at least.

···

On Mon, Jan 09, 2006 at 06:52:20AM +0900, James Edward Gray II wrote:

On Jan 8, 2006, at 3:42 PM, Steve Litt wrote:

>And that's why I enjoy programming in languages like Ruby, Python
>and Java, that protect me from myself.

Wow, that's a mighty unusual alliance of languages. I wonder how most
Java programmers would feel about being lumped in with Python and
Ruby on safety...

--
Chad Perrin [ CCD CopyWrite | http://ccd.apotheon.org ]

This sig for rent: a Signify v1.14 production from http://www.debian.org/

Everyone says that, but I don't see it (till it hits me in the eye? :-).

Maybe I'm just used to C, where the slightest mistake leads to a subtle bug
that happens every couple of weeks.

Sure, if I went out of my way I could make Ruby do corruptible
things, whereas with C I cannot avoid it.

Ruby has beautiful encapsulation. Yeah, it can be defeated, but you'd
really have to try. I want my language to protect me from my own
mistakes, not from my own death wish.

Personally, I'd NEVER gratuitously add a method or instance var to a
class at runtime. If I needed more methods than the class provided,
I'd subclass it. I mean, how hard is it to subclass something,
especially in Ruby? Adding methods and instance variables to classes
in real time reminds me of senators who add a social security
amendment to a defense bill -- it's just bad business that can lead
to no good.
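
To be concrete, here's a made-up example of the distinction I mean:

   # What I'd do: subclass, so the change is contained.
   class NoisyArray < Array
     def push(item)
       puts "push: #{item.inspect}"
       super
     end
   end

   # What I'd avoid: reopening Array and changing it for every user of
   # every library in the process:
   #
   #   class Array
   #     def push(item) ... end
   #   end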

If someone can show me an advantage to adding methods and instance variables
in real time, and that advantage can't be realized with normal OOP
techniques, I'll keep an open mind. But unless it offers me a unique benefit
that I need, I wouldn't do it.

It's easy to use Ruby in a manner that respects encapsulation and a
known state, and if used that way, I'll leave my goggles at home. :-)

SteveT

Steve Litt

slitt@troubleshooters.com

···

On Sunday 08 January 2006 05:04 pm, Gregory Brown wrote:

On 1/8/06, Steve Litt <slitt@earthlink.net> wrote:
> That's why I'm glad my electric stove has
> lights saying which burners are still hot. And that's why I enjoy
> programming in languages like Ruby, Python and Java, that protect me from
> myself.

Java: Yes. (in the average scenario)
Python: Pretty much kinda maybe.
Ruby: You'll shoot your eye out! ;-)

Eivind Eklund wrote:

> Eivind Eklund wrote:
> > At this cost, the type and variable declarations had better give us a
> > lot. In practice, I find that they give me very little, bug-wise:
> > maybe 5% of my simplest bugs are detected by them. The advantages I
> > get are in the speed of the compiled code, and as documentation.
> > However, these benefits are too small to be worthwhile for the size
> > of projects I presently do (one- and two-person projects).
>
> Does a language with type-inference require as many type and variable
> declarations as a language without type-inference?

Does a reference to declaring types and variables make it clear to you
that we're talking about a language where you have to declare types
and variables, and *not* one based on type inference?

No

> Is the C++ type system the same as the SML type system?

Does the SML type system include declaring types and variables to the
degree of taking up roughly half the code?

Depends how you write it

> Will programmers who passively suffer compiler type-checking detect as
> many bugs as programmers who actively use 'type-full' programming as a
> checking-tool?

Will people who use the annoying technique of asking bad rhetorical
questions get on your nerves too?

;-)

Not as much as those who make wild generalizations ;-)

···

On 1/9/06, Isaac Gouy <igouy@yahoo.com> wrote:

Really, and in all friendliness, the question isn't whether they'll
detect as many bugs. The question is whether those who try to use
types as a checking tool find so many bugs and/or get so many other
benefits that the costs introduced by the types are worth it. The
answer will vary by language, by environment, by programmer, and by
methodology used. I believe the static type system of SML or Haskell
may well be worth it; I've just not worked enough with them to know.

For me, in the environment I work in, with the ways I work with
Ruby, I've found that very few of my bugs would be caught by extra
type checking. I just don't end up with wrongly typed data much at
all, and in the few cases where I do, the errors have come up
immediately. Since I wrote my type checking library, I've not had a
subtle bug that would have been caught by more type checking. Before
I wrote the library, I thought that adding more type checking would
catch a significant number of bugs. When I added it and found that it
got in the way, I started looking for cases where it would have
helped. I found very few, and they've so far shown up immediately (as
missing methods).

Eivind.

If you like, you can use the method_added callback to see when a
class gets updated.

For example, putting the following code before your unit tests will
give you an indication of where methods are added. (It would probably
be best to require a file containing this code.) Of course this
doesn't cover everything (like methods being added via eval, or
instance variables being changed), but it seems that with better
analysis tools and libraries, this hole can be covered. See
http://weblog.freeopinion.org/articles/2006/01/06/find-out-where-a-method-was-added-in-ruby
for more details.

# A class like this can be in a library somewhere. This is a rough
# draft implementation.
class MethodDefinitionContainer
  class << self
    include Enumerable

    def instance
      @instance = Hash.new unless defined? @instance
      @instance
    end

    # look up the recorded definition sites for a method
    def [](key)
      key = key.to_sym
      instance[key]
    end

    # record another definition site (caller location) for a method
    def []=(key, value)
      key = key.to_sym
      instance[key] = [] if instance[key].nil?
      instance[key] << value
    end

    def each
      instance.each do |key, value|
        yield(key, value)
      end
    end
  end
end

# End library code

# Defining the hook on Object means (nearly) every class inherits it.
class Object
  def self.method_added(id)
    MethodDefinitionContainer[self.to_s + '.' + id.to_s] = caller[0]
  end
end

···

####
# Unit tests are run here
####

sorted_keys = MethodDefinitionContainer.instance.keys.sort

sorted_keys.each do |key|
  puts key
  values = MethodDefinitionContainer[key]
  values.each do |value|
    puts "\t" + value
  end
end

On 1/8/06, gwtmp01@mac.com <gwtmp01@mac.com> wrote:

On Jan 8, 2006, at 2:40 AM, Gregory Brown wrote:
> Though it's understandable, given that the behavior of your software
> can change quite drastically once it's out of your hands, more so in
> Ruby than in some other languages, this boils down to good practice
> and responsibility. If you're opening up a class and adding or
> removing some code, it's your responsibility to ensure it doesn't
> break things.

One of the things I really liked about Bertrand Meyer's
_Object-Oriented Software Construction_ was his 'programming by
contract' point of view. I've found that concept to be useful in
almost any program I've written after reading that book, regardless
of the language I've used.

One of the most useful ideas is the notion that if client code
refuses to adhere to the pre-conditions then all bets are off and the
library code has no responsibility to ensure correct behavior.

To a certain extent I think the concerns that static people have about
Ruby seem to be concerns about what happens if they violate the contract
by not ensuring the pre-conditions *at run-time*. The response should be:
Why is that my problem? The water is under the bridge at that point.
Game over.

More important is answering:
        How can I discover the pre-conditions associated with Ruby code?
        How can I avoid writing code that breaks the pre-conditions?
        How can I determine the post-conditions?

So if there is valid criticism of the Ruby approach, I think it could
be said that oftentimes the pre-conditions (and post-conditions) are
not documented very well (especially with regard to meta-programming).

Documentation can help, but I think there is room for language
features in this area too (e.g. structured annotations). Namespaces
will also help to localize effects and provide a framework for
describing those effects.

Quick Question: What core classes are modified by ActiveRecord or Og?
Quick Answer: Lots, but good luck finding a concise description.

Gary Wright

--
Brian Takita
http://weblog.freeopinion.org

This is definitely a point worth making: *most* people know that
meta-programming is dangerous stuff, so they're either extremely
careful or they're pretty open about letting you know that their
efforts may well have unsavory effects.

The key point to be made, I believe, is that Ruby puts all the
power... and the responsibility... in the programmer's hands. Which,
in my opinion, is where it should be anyway.

···

On 1/8/06, James Edward Gray II <james@grayproductions.net> wrote:

On Jan 8, 2006, at 9:00 AM, gwtmp01@mac.com wrote:

> One of the things I really liked about Bertrand Meyer's
> _Object-Oriented Software Construction_ was his 'programming by
> contract' point of view. I've found that concept to be useful in
> almost any program I've written after reading that book, regardless
> of the language I've used.
>
> One of the most useful ideas is the notion that if client code
> refuses to adhere to the pre-conditions then all bets are off and the
> library code has no responsibility to ensure correct behavior.

This is an excellent point, in my opinion. If I write code like:

   not_my_object.instance_eval { @secret_value += 1 }

I know I'm crossing the line. When it eventually breaks, I'll know
whose fault it is. For now, though, I'm telling Ruby, "Just trust me;
I know what I'm doing here." The great part is that she believes me
and does it. ;-)

Let's think of examples where this certainly is true and results in
good things. The first one that came to my head, and the example I
used in class, was automated memoization. I'd like to hear some of
the greater hacks and accomplishments people have seen or done using
the openness and dynamism of Ruby as a feature instead of a risk.
(There are plenty of them out there.)
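
(Roughly this sort of thing -- a quick sketch, not production code:)

   # Automated memoization: a class-level 'memoize' that rewrites an
   # existing method into a caching wrapper.
   module Memoize
     def memoize(name)
       original = instance_method(name)
       cache = {}                          # shared cache; fine for a sketch
       define_method(name) do |*args|
         cache[args] ||= original.bind(self).call(*args)
       end
     end
   end

   class Calculator
     extend Memoize

     def slow_square(n)
       sleep 1    # stands in for expensive work
       n * n
     end
     memoize :slow_square
   end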

Does anyone have a good example of a domain-specific language that
heavily uses metaprogramming to make it 'work'?
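
(attr_accessor itself is the tiniest example of the pattern: a
class-level method that writes methods for you, so the code reads
like a declaration. A home-grown version, just to show the shape:)

   class Module
     def my_attr(name)
       define_method(name) { instance_variable_get("@#{name}") }
       define_method("#{name}=") { |value| instance_variable_set("@#{name}", value) }
     end
   end

   class Point
     my_attr :x
     my_attr :y
   end

   pt = Point.new
   pt.x = 3   # a method that was written for us at class-definition time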

···

On 1/8/06, James Edward Gray II <james@grayproductions.net> wrote:

I know I'm crossing the line. When it eventually breaks, I'll know
whose fault it is. For now, though, I'm telling Ruby, "Just trust me;
I know what I'm doing here." The great part is that she believes me
and does it. ;-)

I'm really not sure at all on this, but could well-written C or Perl
be safer than Ruby? I'm thinking the answer to that is 'yes', but I'm
not sure at all why (and at what expense it would come).

···

On 1/8/06, Steve Litt <slitt@earthlink.net> wrote:

Compare them to C :-). Or Perl.