The perens in lisp dilects is there for a reson... macros

Hi,

> Ruby's birthday is traditionally
> February 1994 (I can't remember the exact day),

It was 1993-02-24, for the record.

> and I seem to remember
> that that predates Java, but I'm not sure.

The Java language white paper was first released sometime in 1994, so
you might think Ruby predates Java, but in fact:

  * the development of the Java language (formerly named Oak) started
    in 1991, if I remember correctly.
  * the first public release of Ruby was December 1995.

In conclusion, Java is slightly older than Ruby.

              matz.

···

In message "Re: the perens in lisp dilects is there for a reson... macros." on Mon, 7 Aug 2006 03:04:47 +0900, dblack@wobblini.net writes:

I've always been fascinated by one specific aspect of functional
languages: in theory, it's possible to use tools to prove programs
correct. What a productivity win that would be. At different times, I
used professional programmers who worked for me as guinea pigs to see
just how they would take to the functional style, and the results were
dismal.

Even though the Church and Turing models of computation are unified
(and were unified long before digital computers existed), there still
seems to be something intuitively attractive to many programmers about
stuffing values into little boxes and being able to go get the values
out of the boxes later. I'm not sure if this is fundamental, or a
side-effect of the fact that our practical computing machines are all
Turing machines and most programmers learn something about "memory"
and "secondary storage" early in their training. This explains why
pure functional programming is stylistically so challenging in
practice, whether or not you believe it's challenging in theory.

But one of the more insightful things I've heard recently is this:
Turing-completeness works against productivity and makes programs less
tool-able. And this is a direct consequence of its power, which
necessarily gives you Gödel undecidability in the general case.

The Lispish style (conceptualizing programs as artifacts in a
domain-specific language), which Ruby does extremely well in an
imperative-language context, can be said to have the following virtue:
it allows you to work with your problem domain using a language that
is not Turing-complete. (Let that sink in for a moment.) That means
that your programs are easier to reason about, easier to debug, and
eventually, easier to tool. What you lose in expressive power is
trivial because your standard for language power is related to your
problem domain, not to Turing-completeness in general. What you gain
in productivity is potentially immense.
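
To make that concrete, here is a minimal sketch of a Ruby-embedded DSL
(the Recipe class and its methods are invented for illustration). The
block is just declarations -- data a tool can inspect and validate,
rather than opaque code:

  # A hypothetical configuration-style DSL embedded in Ruby. The block
  # contains only declarations, so the result is data a tool can inspect.
  class Recipe
    attr_reader :steps

    def initialize(&block)
      @steps = []
      instance_eval(&block)   # evaluate the block in the DSL's context
    end

    def ingredient(name, amount)
      @steps << [:ingredient, name, amount]
    end

    def bake(minutes, degrees)
      @steps << [:bake, minutes, degrees]
    end
  end

  bread = Recipe.new do
    ingredient "flour", "500 g"
    ingredient "water", "350 ml"
    bake 40, 220
  end

  # Because the result is data, a checker could verify, say, that every
  # recipe bakes at a safe temperature -- no halting problem in sight.
  p bread.steps.size   # => 3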

···

On 8/8/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

I learned Lisp (1.5) in the early 1970s, and this style of programming
seemed to be tied to Lisp at the time, but I actually had used it
earlier. For some reason FORTRAN programmers, including myself when I
was one, don't usually use this style, perhaps because the original
FORTRAN didn't have structured concepts and forced the use of GO TO
statements.

That's true, but the advantage to Lisp, it seems to me, isn't merely a
perception that has grown from "we did it first" syndrome. I think it
boils down to the fact of Greenspun's Tenth Rule:

  Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C
  or Fortran program contains an ad hoc, informally-specified, bug-ridden,
  slow implementation of half of Common Lisp."

More generally, any sufficiently complicated program written in a
language that doesn't have the same facilities for (as I put it earlier)
jargon-definition as Lisp (which means basically all languages, still,
though some like Ruby are getting awfully close) must essentially
implement part of a Lisp semantic structure for certain types of highly
complex problems. In the case of something sufficiently low-level, such
as C or Fortran, that pretty much means a very scattered Lisp
implementation for particularly complex problems of any type.

For a significantly Lispy language like Ruby, however, there's a
nontrivial percentage of potential problem domains for which Ruby
already contains all the necessary behaviors of Lisp to solve them as
efficiently as possible; in addition, Ruby implements some excellent
features that Lisp lacks by default but that might be built within
Lisp to solve certain types of problems. This makes Ruby more
convenient for solving some problems and (roughly) equally convenient
for solving others. There is then a fairly small set of problem types
for which one would have to reimplement Lisp to solve the problem most
efficiently in Ruby.

The key, of course, is to pick the language that best suits what you do
when programming: if your language of choice already does (almost)
everything for which you'd have to create your own syntax in Lisp, you
probably have the right language for the job. If not, you might need to
find a different language that does so -- or use Lisp to abstract the
problem sufficiently to make it easy to solve.

The way this all works can be analogized to trigonometry: in trig, there
are three basic "laws" from which all other rules for doing trigonometry
are derived. Lisp is like those "laws" with a syntax designed to
simplify the process of derivation. Other languages (such as Fortran,
to pick a target at semi-random) are like a set of useful derivations
that you've memorized to take a test in a trig class. One can work
backward from those memorized derivations to the original three "laws"
of trigonometry, and from there derive other rules for solving new
problems -- something that tends to be necessary, to some degree, in a
lot of complex problem solving. Some languages
provide closer approximations to those three basic principles by
default, so that less work is necessary to solve problems not central to
the design of the language (I'm looking at you, Ruby), and so on.

These opinions are not representative of the management. No programmers
were harmed in the making of this lengthy ramble. We now return you to
your regularly scheduled programming. Hah. Programming. Get it?

···

On Tue, Aug 08, 2006 at 10:31:30PM +0900, M. Edward (Ed) Borasky wrote:

Charles Hoffman wrote:
>That makes sense to me, except that it doesn't seem all that different
>from what you do with any language by writing functions and classes and
>the like. You end up with all these concepts that you've defined in
>code and given names to, and then work with those. So why do Lispies
>make such a big deal over it?
>
I did something similar a long time ago in macro assembler, which I
learned a long time before I learned Lisp. Interestingly enough, I dug
out my copy of Leo Brodie's "Thinking Forth" last night. That
programming style, and the term "factoring", are prominent in that work
(which, by the way, is now available online as a PDF). I think it's
something all programmers of a certain level of maturity do regardless
of language.

--
CCD CopyWrite Chad Perrin [ http://ccd.apotheon.org ]
This sig for rent: a Signify v1.14 production from http://www.debian.org/

John W. Kennedy wrote:

M. Edward (Ed) Borasky wrote:

I once had a boss who claimed to have worked on an IBM 1620. I think
he was trying to impress us as being a "real programmer just like us."
The lab where I worked on a 1620 got rid of it in 1964 ... I'm
guessing he was in junior high school then. :)

The 1620 was still a state-of-the-art product in 1964, and was IBM's
only desk-sized machine of the era. If your lab dumped one, it was not
for obsolescence; its niche successor, the 1130, was still in the future
-- and the 1130 was not compatible at all with the 1620, so upgrades
were slow and cautious. (Many 1620s were instead eventually upgraded to
S/360-30 mainframes, which offered a 1620-compatibility option.)

Actually, I was off by two years. We replaced the 1620 with an 1130
towards the end of 1966. So he might well have been a freshman :).

And ... our FORTRAN programs ported fairly easily. The assembly language
programs I ported by hand. Most of them got better in the process. :)

Actually, the 1620 was the first computer I programmed, and that was
in 1970-71, my first year in college. Back then there were a few
computers or time-sharing terminals in some high schools, but in high
school, I was more interested in electronic music and synthesizers
than in computers.

The University had a couple of IBM System/360 machines by then, but the 1620
was owned by the Engineering school and all freshman engineers had to
take a semester course (C.E. 101 or something like that) which was
half a semester of drafting/engineering graphics with T-squares,
triangles, and French curves, and half a semester of Fortran II
programming on the 1620. There was also an old analog computer in the
room next to the 1620.

I've got fond memories of the 1620. Some buddies and I even came up
with a new language called SCRUBOL (Scientifically Compatible
Relatively Unusual Basic Operating Language) and wrote a compiler for
it. The 1620 had a disk drive and a CalComp plotter, which was on the
paper-tape I/O port; you plotted by using the punch-paper-tape
statement in Fortran. Some other guys took the fast paper-tape reader
that had been replaced by the CalComp plotter and interfaced it to a
PDP-8. They had to figure out how to slow it down to get the PDP-8 to
read it reliably.

But then I guess I'm not a real programmer.

···

On 8/20/06, John W. Kennedy <jwkenne@attglobal.net> wrote:

M. Edward (Ed) Borasky wrote:
> I once had a boss who claimed to have worked on an IBM 1620. I think he
> was trying to impress us as being a "real programmer just like us." The
> lab where I worked on a 1620 got rid of it in 1964 ... I'm guessing he
> was in junior high school then. :)

The 1620 was still a state-of-the-art product in 1964, and was IBM's
only desk-sized machine of the era. If your lab dumped one, it was not
for obsolescence; its niche successor, the 1130, was still in the future
-- and the 1130 was not compatible at all with the 1620, so upgrades
were slow and cautious. (Many 1620s were instead eventually upgraded to
S/360-30 mainframes, which offered a 1620-compatibility option.)

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/


M. Edward (Ed) Borasky wrote:

···

dblack@wobblini.net wrote:
For that matter, how did *Google* get its name, and does it have anything to do with Barney Google? :slight_smile:

Google comes from the name of the number 10^100, which is called a googol. According to Wikipedia, Google originally was a misspelling of this.

--
  Ola Bini (http://ola-bini.blogspot.com)
  JvYAML, RbYAML, JRuby and Jatha contributor
  System Developer, Karolinska Institutet (http://www.ki.se)
  OLogix Consulting (http://www.ologix.com)

  "Yields falsehood when quined" yields falsehood when quined.

Hi --

···

On Mon, 7 Aug 2006, M. Edward (Ed) Borasky wrote:

dblack@wobblini.net wrote:

Hi --

On Mon, 7 Aug 2006, M. Edward (Ed) Borasky wrote:

dblack@wobblini.net wrote:

I think Ruby is slightly older than Java, isn't it? :)

Maybe ... Java is 1.5 and Ruby is 1.8.5, so by that measure, yes.

No, I meant older as in... older :) Ruby's birthday is traditionally
February 1994 (I can't remember the exact day), and I seem to remember
that that predates Java, but I'm not sure.

Where can one find "a brief history of Ruby?" I'm too lazy to ask Google, especially when Matz is on the list. :)

Whoops, I was wrong; it was 1993. See
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/382

David

--
http://www.rubypowerandlight.com => Ruby/Rails training & consultancy
   ----> SEE SPECIAL DEAL FOR RUBY/RAILS USERS GROUPS! <-----
http://dablog.rubypal.com => D[avid ]A[. ]B[lack's][ Web]log
Ruby for Rails => book, Ruby for Rails
http://www.rubycentral.org => Ruby Central, Inc.

Ola Bini wrote:

Kristof Bastiaensen wrote:

Lisp is definitely not a core language. The standard is about 1100 pages,
so it contains most of the stuff you would expect, string handling,

You seem to confuse the language Common Lisp with the mathematical concept Lisp. Lisp is seven operators and a one-page denotational semantics definition. That's about as small and core as it gets.

Yes ... I wish I could remember who made that distinction and when. In any event, I'm guessing it was in the days of Lisp 1.5, which is certainly a core language. IIRC Lisp 1.5 had some primitive string handling, and some implementations even did floating point. But this was well before Common Lisp 1, Common Lisp 2, or the ANSI standard.

As I noted in an earlier post, I'm now off looking for the Ruby "core language". :)
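
And for the curious, those seven operators (quote, atom, eq, car, cdr,
cons, cond) are small enough to model in a few lines of Ruby. This is a
toy sketch with arrays standing in for cons cells and symbols for atoms,
not any particular Lisp:

  # A toy model of McCarthy's seven primitives. Illustrative only.
  module MicroLisp
    module_function

    def atom?(x); !x.is_a?(Array); end                  # atom
    def eq?(a, b); atom?(a) && atom?(b) && a == b; end  # eq
    def car(x); x[0]; end                               # car
    def cdr(x); x[1..-1]; end                           # cdr
    def cons(a, d); [a] + d; end                        # cons

    # quote and cond are special forms, so they live in a tiny evaluator:
    def evaluate(expr)
      return expr unless expr.is_a?(Array)
      case car(expr)
      when :quote then expr[1]
      when :atom  then atom?(evaluate(expr[1]))
      when :eq    then eq?(evaluate(expr[1]), evaluate(expr[2]))
      when :car   then car(evaluate(expr[1]))
      when :cdr   then cdr(evaluate(expr[1]))
      when :cons  then cons(evaluate(expr[1]), evaluate(expr[2]))
      when :cond  then cdr(expr).each { |c, e| return evaluate(e) if evaluate(c) }
      end
    end
  end

  p MicroLisp.evaluate([:car,  [:quote, [:a, :b, :c]]])        # => :a
  p MicroLisp.evaluate([:cons, [:quote, :x], [:quote, [:y]]])  # => [:x, :y]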

···

On Mon, 07 Aug 2006 01:51:08 +0900, M. Edward (Ed) Borasky wrote:

No, I don't. The post I replied to was mentioning practical programming
languages, not mathematical concepts. Lisp can mean the family of
languages, to which Common Lisp, Emacs Lisp, and scheme belong, and the
programming language Common Lisp. But since he was mentioning scheme as a
different language, I concluded that he meant the last. But I should have
clarified that it my post though.

Regards,
Kristof

···

On Mon, 07 Aug 2006 04:08:29 +0900, Ola Bini wrote:

Kristof Bastiaensen wrote:

On Mon, 07 Aug 2006 01:51:08 +0900, M. Edward (Ed) Borasky wrote:

Lisp is definitely not a core language. The standard is about 1100 pages,
so it contains most of the stuff you would expect, string handling,

You seem to confuse the language Common Lisp with the mathematical
concept Lisp. Lisp is seven operators and a one-page denotational
semantics definition. That's about as small and core as it gets.

The Java language white paper was first released sometime in 1994, so
you might think Ruby predates Java, but in fact:

  * the development of the Java language (formerly named Oak) started
    in 1991, if I remember correctly.
  * the first public release of Ruby was December 1995.

In conclusion, Java is slightly older than Ruby.

                                                        matz.

If one considers the appearance of the famous Dr. Dobb's Journal article
to be the "public release" of Java, it appeared in May 1995. That predates
Ruby's public release, but not by much.

Francis Cianfrocca wrote:


I learned Lisp (1.5) in the early 1970s, and this style of programming
seemed to be tied to Lisp at the time, but I actually had used it
earlier. For some reason FORTRAN programmers, including myself when I
was one, don't usually use this style, perhaps because the original
FORTRAN didn't have structured concepts and forced the use of GO TO
statements.

I've always been fascinated by one specific aspect of functional
languages: in theory, it's possible to use tools to prove programs
correct. What a productivity win that would be. At different times, I
used professional programmers who worked for me as guinea pigs to see
just how they would take to the functional style, and the results were
dismal.

You are bringing back memories and dreams I had in graduate school. :) Nobody where I went to school knew anything about this, but it was all in the library. I still have most of the books I bought on the subject, which date back to the mid-1970s for the most part.

I learned the Von Neumann style first -- it's hard not to when your first programming class is taught on ILLIAC I. :) But I learned the functional style and prefer it.

But one of the more insightful things I've heard recently is this:
Turing-completeness works against productivity and makes programs less
tool-able. And this is a direct consequence of its power, which
necessarily gives you Gödel undecidability in the general case.

More memories and dreams. :) Maybe that's why regular expressions are so popular -- because they're equivalent to the lowest common denominator, a finite state machine, about which you can pretty much prove and discover everything.
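
As a toy illustration, this hand-rolled state machine accepts exactly
the same strings as the regex /\Aab*c\z/ -- a fixed transition table
and one current state, no other memory, which is why everything about
such patterns is decidable:

  # A DFA for the same language as /\Aab*c\z/.
  TRANSITIONS = {
    [:start,  "a"] => :middle,
    [:middle, "b"] => :middle,
    [:middle, "c"] => :done,
  }

  def matches_abc?(str)
    state = :start
    str.each_char do |ch|
      # Any transition missing from the table rejects the string.
      state = TRANSITIONS[[state, ch]] or return false
    end
    state == :done
  end

  %w[ac abc abbbc abcb].each do |s|
    puts "#{s}: dfa=#{matches_abc?(s)} regex=#{!!(s =~ /\Aab*c\z/)}"
  end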

The Lispish style (conceptualizing programs as artifacts in a
domain-specific language), which Ruby does extremely well in an
imperative-language context, can be said to have the following virtue:
it allows you to work with your problem domain using a language that
is not Turing-complete. (Let that sink in for a moment.) That means
that your programs are easier to reason about, easier to debug, and
eventually, easier to tool. What you lose in expressive power is
trivial because your standard for language power is related to your
problem domain, not to Turing-completeness in general. What you gain
in productivity is potentially immense.

Well ... there are two reasons to write a Domain Specific Language:

1. As a way of organizing your own programming, and
2. As a tool to be used by *others* to perform a job.

For 1, I think any way of organizing your code that works for you and your team/enterprise is in some sense the "right" way, whether that be functional, object-oriented, or even spaghetti code. :) But for 2, I'm not at all convinced by the DSLs I've seen over the years, including those I've written myself. People in general aren't programmers, and designing a programming language for them may not in fact lead to increases in *their* productivity.

When I look at the main project I've been working on for the past ten years, in my mind, at any rate, it is a domain-specific language. Most of it is written in fairly small Perl scripts, with some graphics done in R. In all, it's probably smaller in lines of code than Rails. I think I could "prove things about its correctness". It badly needs a rewrite/refactoring, given that big chunks of it were developed on a system that only allowed 14-character file names. :) But it is in fact a *small* application -- we aren't talking millions of lines of code or anything like that. What's large is the *data*, which is typical of "scientific" applications.

···

On 8/8/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

"Francis Cianfrocca" <garbagecat10@gmail.com> writes:

Even though the Church and Turing models of computation are unified
(and were unified long before digital computers existed), there still
seems to be something intuitively attractive to many programmers about
stuffing values into little boxes and being able to go get the values
out of the boxes later. I'm not sure if this is fundamental, or a
side-effect of the fact that our practical computing machines are all
Turing machines and most programmers learn something about "memory"
and "secondary storage" early in their training. This explains why
pure functional programming is stylistically so challenging in
practice, whether or not you believe it's challenging in theory.

Turing machines derive from a model of a human working with pencil and
paper. That's pretty fundamental.

Steve

Francis Cianfrocca wrote:


I learned Lisp (1.5) in the early 1970s, and this style of programming
seemed to be tied to Lisp at the time, but I actually had used it
earlier. For some reason FORTRAN programmers, including myself when I
was one, don't usually use this style, perhaps because the original
FORTRAN didn't have structured concepts and forced the use of GO TO
statements.

I've always been fascinated by one specific aspect of functional
languages: in theory, it's possible to use tools to prove programs
correct. What a productivity win that would be.

Notice that to prove some properties of a program you don't necessarily
need a functional language. For example, you might like to take a look at
Spark[1], a statically verifiable subset of Ada, or at Vault[2].

Other languages can prove correctness for *some* functionality. I had the
chance to work with NesC[3], which is a nifty language for embedded
systems, basically C + concurrency + components, where the compiler checks
that there are no race conditions in concurrent code (it gets some false
positives, but no wrong code passes).

[1] http://www.praxis-his.com/sparkada/intro.asp
[2] http://research.microsoft.com/vault/intro_page.htm
[3] http://nescc.sourceforge.net/

···

On 8/8/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote:

Chad Perrin wrote:

perception that has grown from "we did it first" syndrome. I think it
boils down to the fact of Greenspun's Tenth Rule:

  Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C
  or Fortran program contains an ad hoc, informally-specified, bug-ridden,
  slow implementation of half of Common Lisp."

Well ... I think I disagree with this, having written some moderately complicated FORTRAN programs and having read through many more. What they almost always contain, however, is a domain-specific language that controls where they find their inputs, what processing they do on them, where they put their outputs and in what formats, and so on -- all specified on what we used to call "control cards". Some of the parsers and DSLs of this nature are quite complex and rich, but they are in no sense of the word "Lisp".

Curiously enough, if you read Chuck Moore's description of the evolution of FORTH, you'll find something similar -- every sufficiently complicated FORTRAN program contained a huge "case" statement which was a "language interpreter".
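
The pattern Moore describes is essentially the following throwaway
sketch (the card names are made up for illustration):

  # The "huge case statement as language interpreter" pattern, reduced
  # to a toy: each line is a control card dispatched on its leading verb.
  def run_control_cards(deck)
    deck.each_line do |line|
      verb, *args = line.split
      case verb
      when "INPUT"  then puts "reading from #{args.first}"
      when "SCALE"  then puts "scaling by #{args.first.to_f}"
      when "OUTPUT" then puts "writing to #{args.first}"
      else raise "unknown card: #{line.strip}"
      end
    end
  end

  run_control_cards(<<~CARDS)
    INPUT data.txt
    SCALE 2.5
    OUTPUT results.txt
  CARDS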

By the way, one of the classic "demonstrations of Artificial Intelligence", ELIZA, was written not in Lisp but in SLIP, a list-processing language patterned on Lisp and implemented in (pregnant pause) FORTRAN. But the SLIP interpreter, rather than being "sufficiently complicated," was in fact "delightfully simple."

dblack@wobblini.net wrote:

Whoops, I was wrong; it was 1993. See
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/382

Thanks!! In February of 1993, I was just starting on the project that has consumed most of my "paid programming" time since then. It started out as Korn shell augmented with "gawk" for the more complicated pieces, then got migrated to Perl 4 as soon as I discovered Perl 4. It picked up the R component about 2000, but it's still mostly Perl 4. The Java 1.1 program was done in 1997.

So, going back to another thread, it looks like the "core language" of Ruby is the object model and the Perl features. Is there by any chance a place where I could get the Ruby 1.0 source? That might be *very* interesting.

Does it *really* matter which came first?

···

On 06-08-08 at 01:37, Francis Cianfrocca wrote:

The Java language white paper was first released sometime in 1994, so
you might think Ruby predates Java, but in fact:

  * the development of the Java language (formerly named Oak) started
    in 1991, if I remember correctly.
  * the first public release of Ruby was December 1995.

In conclusion, Java is slightly older than Ruby.

                                                        matz.

If one considers the appearance of the famous Dr. Dobb's Journal article
to be the "public release" of Java, it appeared in May 1995. That predates
Ruby's public release, but not by much.

--
Jeremy Tregunna
jtregunna@blurgle.ca

"One serious obstacle to the adoption of good programming languages is the notion that everything has to be sacrificed for speed. In computer languages as in life, speed kills." -- Mike Vanier

M. Edward (Ed) Borasky wrote:
> Well ... there are two reasons to write a Domain Specific Language:

> 1. As a way of organizing your own programming, and
> 2. As a tool to be used by *others* to perform a job.
>
> For 1, I think any way of organizing your code that works for you and
> your team/enterprise is in some sense the "right" way, whether that be
> functional, object-oriented, or even spaghetti code. :) But for 2, I'm
> not at all convinced by the DSLs I've seen over the years, including
> those I've written myself. People in general aren't programmers, and
> designing a programming language for them may not in fact lead to
> increases in *their* productivity.

I'm glad you enjoyed the trip down memory lane ;-). I gave up on
functional programming over a dozen years ago, for two reasons: first, I
learned the hard way that writing really good compilers for lambda-based
languages requires a huge amount of mathematical knowledge, and mine
doesn't go beyond partial diffeqs. Second, my experiments with ordinary
professional programmers proved that the proportion of them who are
mentally receptive to the functional style is too small to be
commercially valuable. I was completely taken with FP myself, but one
programmer does not a team make.

To your point about regexes: remember that they are Type-3 (regular)
languages in the Chomsky hierarchy, whereas most programming languages
(including useful DSLs) are Type-2 languages (context-free grammars).
AFAIK, a language that generates expressions reducible to NFAs cannot
be Turing-complete, but correct me if I'm wrong on that.
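
The classic illustration of that hierarchy gap is balanced parentheses:
a context-free language that no classical regular expression can
recognize, because the recognizer needs an unbounded counter rather
than a fixed number of states. A few lines of Ruby, for concreteness:

  # Balanced parentheses need unbounded counting, which no finite
  # automaton (hence no classical regular expression) can provide.
  def balanced?(str)
    depth = 0
    str.each_char do |ch|
      depth += 1 if ch == "("
      depth -= 1 if ch == ")"
      return false if depth < 0   # a ")" arrived before its "("
    end
    depth == 0
  end

  p balanced?("(()())")   # => true
  p balanced?("(()")      # => false
  p balanced?(")(")       # => false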

A poorly-designed DSL is just as useless as any other tool that doesn't
reflect an appropriate modeling of the problem domain (not too powerful,
not too trivial, but just right). Still, I think it's pretty exciting
that Ruby makes DSLs easy and graceful, thus activating for imperative
programmers one of the most useful aspects of Lisp. (I'm sure that
statement will be hotly contested by Lisp partisans, but let them rant.)

With DSLs, I'm thinking about tools for programmers, not for
non-programmers. (Although there is a very interesting class of
highly-successful mini-languages for non-programmers, like MS Office
macros.) My point about "tooling" was in regard to automatic programs
that can reason about other programs. DSLs and other mini-languages are
far easier to write workbenches and IDEs for, which I think is really
interesting and potentially a huge productivity boost. And again, it's
the reduction from full Turing-completeness that may be the secret
sauce. I wish I could prove that, but I've already told you that I'm no
mathematician, so it's a hunch at best.

···

--
Posted via http://www.ruby-forum.com/.

Steven Lumos wrote:

Turing machines derive from a model of a human working with pencil and
paper. That's pretty fundamental.

Steve

So then you would argue that imperative languages are fundamentally more
natural for people to program in than functional languages?
Notwithstanding that they are mathematically equivalent?

I'm not an FP partisan myself so I won't argue with you, but I have a
feeling some Lisp and OCaml fanboys are firing up their blowtorches for
you ;)

···

--
Posted via http://www.ruby-forum.com/.

gabriele renzi wrote:

Francis Cianfrocca ha scritto:

> Notice that to prove some properties of a program you don't
> necessarily need a functional language. For example, you might like
> to take a look at Spark[1], a statically verifiable subset of Ada, or
> at Vault[2].

I'm not surprised that someone would mention Diana, the Ada subset with
a formal semantics. People have been working on things like this for
decades, and all of the work is interesting. However, I'm more
interested in the economic value of high-quality software. And the
economic dimensions include such things as widespread availability of
reasonably-priced talent, high productivity, good stability in
production, etc. I think that because of Rails, Ruby may be poised to
break out into the programming mainstream, which is interesting because
of Ruby's underappreciated potential to fundamentally improve the
dynamics of software delivery.

Best wishes, Gabriele, and thanks for writing ;)

···

--
Posted via http://www.ruby-forum.com/.

Steven Lumos wrote:

"Francis Cianfrocca" <garbagecat10@gmail.com> writes:
  

Even though the Church and Turing models of computation are unified
(and were unified long before digital computers existed), there still
seems to be something intuitively attractive to many programmers about
stuffing values into little boxes and being able to go get the values
out of the boxes later. I'm not sure if this is fundamental, or a
side-effect of the fact that our practical computing machines are all
Turing machines and most programmers learn something about "memory"
and "secondary storage" early in their training. This explains why
pure functional programming is stylistically so challenging in
practice, whether or not you believe it's challenging in theory.
    
Turing machines derive from a model of a human working with pencil and
paper. That's pretty fundamental.
  

First of all, while a Turing machine is a great "experimental animal", our "practical computing machines" are Von Neumann machines rather than Turing machines. And Von Neumann machines derive from a model of a human working with a mechanical desk calculator. In fact, the people -- rooms full of people -- who operated them were called "computers". Properly speaking, a Von Neumann machine is an "electronic" computer or "automatic" computer -- a machine doing what people used to do.

Second, I don't think the actual unification of Church and Turing models occurred *long* before digital computers existed. The basic Turing and Godel and Church papers were written in the early 1930s, and by the late 1930s there were working relay-based digital computers. Vacuum tube machines started showing up (in public, anyhow) in the early 1950s and in "war rooms" shortly after the end of WW II.