Francis Cianfrocca wrote:
>
> I learned Lisp (1.5) in the early 1970s, and this style of programming
> seemed to be tied to Lisp at the time, but I actually had used it
> earlier. For some reason FORTRAN programmers, including myself when I
> was one, don't usually use this style, perhaps because the original
> FORTRAN didn't have structured concepts and forced the use of GO TO
> statements.
>
> I've always been fascinated by one specific aspect of functional
> languages: in theory, it's possible to use tools to prove programs
> correct. What a productivity win that would be. At different times, I
> used professional programmers who worked for me as guinea pigs to see
> just how they would take to the functional style, and the results were
> dismal.
You are bringing back memories and dreams I had in graduate school. Nobody where I went to school knew anything about this, but it was all in the library. I still have most of the books I bought on the subject, which date back to the mid-1970s for the most part.
I learned the Von Neumann style first -- it's hard not to when your first programming class is taught on ILLIAC I. But I learned the functional style and prefer it.
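To make the contrast between the two styles concrete, here is a toy Ruby sketch of the same computation done both ways (the example is invented for illustration, not taken from anyone's actual code):

```ruby
# Von Neumann (imperative) style: state is mutated step by step,
# and you have to track that state to reason about the code.
def sum_of_squares_imperative(numbers)
  total = 0
  numbers.each do |n|
    total += n * n
  end
  total
end

# Functional style: the same computation as a single expression,
# with no mutable state -- each piece can be reasoned about in isolation.
def sum_of_squares_functional(numbers)
  numbers.map { |n| n * n }.reduce(0, :+)
end

p sum_of_squares_imperative([1, 2, 3])  # => 14
p sum_of_squares_functional([1, 2, 3])  # => 14
```

The functional version is the one that lends itself to the kind of tooling and proof Francis mentions: it is a composition of pure functions, so its meaning doesn't depend on hidden state.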
> But one of the more insightful things I've heard recently is this:
> Turing-completeness works against productivity and makes programs less
> tool-able. And this is a direct consequence of its power, which
> necessarily gives you Gödel undecidability in the general case.
More memories and dreams. Maybe that's why regular expressions are so popular -- because they're equivalent to the lowest common denominator, a finite state machine, about which you can pretty much prove and discover everything.
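Here's a toy Ruby sketch of why a finite state machine is so tractable: it's a closed, finite object, so questions that are undecidable for general programs -- "does it accept anything at all?" -- reduce to simple graph search. (The machine and all names below are invented for illustration; it corresponds roughly to the regex /\Aab*c\z/.)

```ruby
# A DFA is a finite, fully inspectable object: states, alphabet,
# transition table, start state, and accepting states.
DFA = Struct.new(:states, :alphabet, :delta, :start, :accept) do
  # Run the machine on an input string.
  def accepts?(str)
    state = start
    str.each_char do |c|
      state = delta[[state, c]]
      return false if state.nil?   # no transition: reject
    end
    accept.include?(state)
  end

  # A question that is undecidable for Turing-complete programs but
  # trivial here: "is the accepted language empty?" It's just
  # reachability from the start state to any accepting state.
  def empty_language?
    seen  = [start]
    queue = [start]
    until queue.empty?
      s = queue.shift
      return false if accept.include?(s)
      alphabet.each do |c|
        t = delta[[s, c]]
        if t && !seen.include?(t)
          seen  << t
          queue << t
        end
      end
    end
    true
  end
end

# A machine equivalent to the regex /\Aab*c\z/
m = DFA.new(
  [:q0, :q1, :q2],
  %w[a b c],
  { [:q0, "a"] => :q1, [:q1, "b"] => :q1, [:q1, "c"] => :q2 },
  :q0,
  [:q2]
)

puts m.accepts?("abbbc")  # => true
puts m.accepts?("abca")   # => false
puts m.empty_language?    # => false
```

Equivalence of two machines, finiteness of the language, the shortest accepted string -- all of these are decidable by similar finite searches, which is exactly the "prove and discover everything" property.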
> The Lispish style (conceptualizing programs as artifacts in a
> domain-specific language), which Ruby does extremely well in an
> imperative-language context, can be said to have the following virtue:
> it allows you to work with your problem domain using a language that
> is not Turing-complete. (Let that sink in for a moment.) That means
> that your programs are easier to reason about, easier to debug, and
> eventually, easier to tool. What you lose in expressive power is
> trivial because your standard for language power is related to your
> problem domain, not to Turing-completeness in general. What you gain
> in productivity is potentially immense.
Well ... there are two reasons to write a Domain Specific Language:
1. As a way of organizing your own programming, and
2. As a tool to be used by *others* to perform a job.
For 1, I think any way of doing things that works -- whether functional, object-oriented, or even spaghetti code -- is in some sense the "right" way, as long as it organizes your code for yourself and your team or enterprise. But for 2, I'm not at all convinced by the DSLs I've seen over the years, including those I've written myself. People in general aren't programmers, and designing a programming language for them may not in fact lead to increases in *their* productivity.
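As a sketch of what I mean by 1 -- a DSL as a way of organizing your own code -- here is a toy internal DSL in Ruby. Every name in it (Recipe, ingredient, step) is invented for illustration; the point is that the vocabulary exposed to a "program" in the mini-language is closed and nowhere near Turing-complete, so a tool can answer any question about such a program.

```ruby
# A minimal internal DSL in the Ruby style the thread describes:
# the "language" a user writes in is a fixed vocabulary of
# declarations -- no loops, no conditionals, no recursion.
class Recipe
  attr_reader :ingredients, :steps

  def initialize(name, &block)
    @name        = name
    @ingredients = []
    @steps       = []
    instance_eval(&block)   # interpret the DSL body
  end

  # The DSL's entire vocabulary:
  def ingredient(name, amount)
    @ingredients << [name, amount]
  end

  def step(text)
    @steps << text
  end

  def to_s
    lines = ["Recipe: #{@name}"]
    @ingredients.each { |n, a| lines << "  - #{a} #{n}" }
    @steps.each_with_index { |s, i| lines << "  #{i + 1}. #{s}" }
    lines.join("\n")
  end
end

# A "program" in the mini-language. Because the vocabulary is closed,
# a tool can list ingredients, count steps, or check units without
# running into the halting problem.
pancakes = Recipe.new("Pancakes") do
  ingredient "flour", "200 g"
  ingredient "milk",  "300 ml"
  step "Whisk everything together."
  step "Fry in a hot pan."
end

puts pancakes
```

Whether a non-programmer would actually *write* in such a language, rather than just benefit from it indirectly, is exactly the part of 2 I remain unconvinced about.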
When I look at the main project I've been working on for the past ten years, in my mind, at any rate, it is a domain-specific language. Most of it is written in fairly small Perl scripts, with some graphics done in R. In all, it's probably smaller in lines of code than Rails. I think I could "prove things about its correctness". It badly needs a rewrite/refactoring, given that big chunks of it were developed on a system that only allowed 14-character file names. But it is in fact a *small* application -- we aren't talking millions of lines of code or anything like that. What's large is the *data*, which is typical of "scientific" applications.
On 8/8/06, M. Edward (Ed) Borasky <znmeb@cesmail.net> wrote: