"Good Ideas, Through the Looking Glass"

Let's paraphrase this:

"Although the possibility of travelling longer distances at greater speed
was heralded as one of the great consequences
of Orville and Wilbur Wright's profound idea of combining an internal
combustion engine with their newly uncovered mastery of controlling
the flight of a glider in three axes, it quickly turned
out to enable a dangerous technique and to constitute an
unlimited source of pitfalls. Man must stay on the ground and out of such
dangerous flying machines, if the possibility of crashing was not to become
a nightmare. Aeroplanes were recognized as an extremely bad idea."

Sure, self-modification is powerful, and anything powerful has its
dangers, but that doesn't mean we haven't figured out ways to contain
and control those dangers.

Professor Wirth is known for his strong, often obstinate opinions.
Reading a bit further, he uses the referenced paragraph as
motivation for the introduction of various indirect addressing modes
in computer architectures, which removed the need for program
modification to implement concepts like arrays. He then goes on to
disparage the notion of array descriptors for array bounds checking.

This last argument seems to be based solely on perceived 'efficiency'.
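For what it's worth, the economics have shifted since then: in a modern dynamic language like Ruby, every array access is bounds-checked by the runtime as a matter of course. A rough sketch (my illustration, not anything from Wirth's paper):

```ruby
# Ruby's runtime carries each array's length along with the data,
# much like an array descriptor, so out-of-range access is always
# detectable instead of reading garbage memory.
a = [10, 20, 30]

a[1]          # in-bounds indexing works as expected
a[99]         # out-of-bounds read returns nil, never random memory

begin
  a.fetch(99) # fetch turns the bounds violation into an explicit error
rescue IndexError => e
  # e.message names the failing index
end
```

Nobody seriously argues today that this checking is too expensive to afford.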

He seems to see things in stark black and white: techniques are either
'good' or 'bad' depending on his subjective assessment, without
regard to context or to the evolution of technology with its attendant
shift in the economies of processors and memory.

Look at his opinion about functional programming:

"To postulate a state-less model of computation on top of a machinery
whose most eminent characteristic is state, seems to be an odd idea,
to say the least. The gap between model and machinery is wide, and
therefore costly to bridge. No hardware support feature can wash this
fact aside: It remains a bad idea for practice. This has in due time
also been recognized by the protagonists of functional languages. They
have introduced state (and variables) in various tricky ways. The
purely functional character has thereby been compromised and
sacrificed. The old terminology has become deceiving."

This paragraph is deceiving. FP advocates have not "introduced state
(and variables) in various tricky ways" because of the cost of
implementation; they have done it because sometimes, as when you are
doing IO, you NEED to have side-effects.
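The point can be illustrated even in Ruby (my sketch, purely to make the argument concrete): a pure function composes and replays freely, but the moment a program has to show the user anything, an effect is unavoidable.

```ruby
# A pure function: its result depends only on its input, and calling
# it has no observable effect on the world.
double = ->(x) { x * 2 }

# An effectful "function": its whole purpose IS the side effect (IO).
announce = ->(x) { $stdout.puts("result: #{x}"); x }

# Purity composes and caches trivially...
doubled = [1, 2, 3].map(&double)   # => [2, 4, 6]

# ...but to communicate the answer at all, the effect must happen.
doubled.each(&announce)
```

That is why Haskell has monadic IO and ML has refs: not a retreat forced by implementation cost, but an acknowledgment that programs ultimately exist to have effects.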

Then he goes on to minimize OOP: "After all, the old cornerstones of
procedural programming reappear, albeit embedded in a new terminology:
Objects are records, classes are types, methods are procedures, and
sending a method is equivalent to calling a procedure. True, records
now consist of data fields and, in addition, methods; and true, the
feature called inheritance allows the construction of heterogeneous
data structures, useful also without object-orientation. Was this
change of terminology expressing an essential paradigm shift, or was
it a vehicle for gaining attention, a
'sales trick'?"

I've actually gone head to head with the good professor about his
limited view of OOP. Many years ago I attended a lecture he gave at
UNC, as a guest of Fred Brooks. His talk was on "object oriented
programming with Oberon." While I can't recall the exact details, he
basically boiled OOP down to a stylized use of case statements using
variant records. When I suggested that perhaps OOP might be more
about decoupling software components by using Kay's message semantics
a la Smalltalk, he first tried to argue, but apparently failing to
understand the question, quickly devolved to a statement like "All you
American programmers are hacks."

There's a famous story about a similar lecture he gave at Apple, where
someone else pushed back in a similar way: if Oberon doesn't have
encapsulation, how can it be object-oriented? In this case, Wirth's
ultimate rejoinder boiled down to "who can really say what
object-oriented means?" To which the questioner responded, "Well, I
suppose I do. I'm Alan Kay and I invented the term."

This story has appeared in various forms. Here's a reference from the
ruby-talk list:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/18422

I do have to say though that the interpretation in this post seems a
little off in that it stresses inheritance, which Kay doesn't seem to
consider an essential feature of OOP. Here's a passage from his
article about the early history of Smalltalk:

   "By this time (1972) most of Smalltalk's schemes had been sorted out
     into six main ideas that were in accord with the initial premises in
     designing the interpreter.

         1. Everything is an object
         2. Objects communicate by sending and receiving messages
             (in terms of objects)
         3. Objects have their own memory (in terms of objects)
         4. Every object is an instance of a class (which must be an object)
         5. The class holds the shared behavior for its instances
             (in the form of objects in a program list)
         6. To eval a program list, control is passed to the first object
             and the remainder is treated as its message

     The first three principles are what objects "are about"--how
     they are seen and used from "the outside." These did not require any
     modification over the years. The last three--objects from the
     inside--were tinkered with in every version of Smalltalk (and in
     subsequent OOP designs)."

So the lasting essentials of object-orientedness for Kay are
everything being an object, computation built solely on messages
between objects, and encapsulation of object state. These are shared
with Ruby. Note that, for Kay, classes are an optional (even
experimental) feature, and inheritance isn't even mentioned.
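A quick Ruby sketch of those three essentials (my own illustration of standard Ruby semantics):

```ruby
# 1. Everything is an object -- even integers and nil respond to messages.
1.is_a?(Object)      # => true
nil.is_a?(Object)    # => true

# 2. Computation is message sending; obj.foo is sugar for obj.send(:foo).
3.send(:+, 4)        # => 7

# 3. Object state is encapsulated: instance variables have no external
#    syntax at all, only whatever methods the object chooses to expose.
class Counter
  def initialize
    @count = 0       # invisible from outside the object
  end

  def tick
    @count += 1
  end
end

c = Counter.new
c.tick               # => 1
# c.count would raise NoMethodError -- no reader was ever exposed
```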

Now there are some good things in the Wirth paper. I like his
observations about how computer architects often get things quite
wrong when they define complex instructions to 'help', say, compiler
writers. This resonated with my early experience at IBM: on my first
day on the job I was handed a specification for a new computer called
FS, which was then supposed to be the replacement for the S/370. For a
cogent analysis of where THAT was likely to (and in fact did) end up,
see Memo 125, a confidential IBM memo from 1974 which I never expected
to see again.

On the whole though, despite his notable accomplishments, such as
designing Pascal and popularizing interpreters using what we now call
byte-codes, he seems to be stuck in the late 1960s/early 1970s, and
dismisses anything which doesn't fit into his limited view, which
seems to require software designs quite close to the hardware
architectures he liked back then.

He really seems to have been re-inventing Pascal ever since the first
version, Modula and Oberon are really just slightly different Pascals.

···

On 10/16/06, Rich Morin <rdm@cfcl.com> wrote:

FYI, here is a quote which seems relevant to Ruby:

  Although the possibility of program modification at
  run-time was heralded as one of the great consequences
  of John von Neumann's profound idea of storing
  program and data in the same memory, it quickly turned
  out to enable a dangerous technique and to constitute an
  unlimited source of pitfalls. Program code must remain
  untouched, if the search for errors was not to become a
  nightmare. Program's self-modification was recognized as
  an extremely bad idea.

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

Rick DeNatale wrote:
[snip]

There's a famous story about a similar lecture he gave at Apple, where
someone else pushed back in a similar way. If Oberon doesn't have
encapsulation how can it be object-oriented. In this case, Wirth's
ultimate rejoinder boiled down to "who can really say what
object-oriented means." To which the questioner responded, "Well, I
suppose I do, I'm Alan Kay and I invented the term."

[snip]

The most authoritative-looking account[1] I could find says that it
was not Wirth giving the lecture (and that Alan didn't use his name in
the retort). Good story, though :slight_smile:

[1] http://c2.com/cgi/wiki?HeInventedTheTerm

Thank you for linking to this. It's fascinating.

···

On 10/16/06, Rick DeNatale <rick.denatale@gmail.com> wrote:

On 10/16/06, Rich Morin <rdm@cfcl.com> wrote:
> FYI, here is a quote which seems relevant to Ruby:
>

IMO, Bertrand Meyer and his colleagues are the ones who have taken the baton from Wirth and gone forward to develop a modern, elegant, and fully object-oriented language from Pascal (via Ada) -- Eiffel. In some ways the antithesis of Ruby, Eiffel, because it is so well designed and because it is backed up with excellent libraries, is also a very satisfying language to program in.

Regards, Morton

···

On Oct 16, 2006, at 3:15 PM, Rick DeNatale wrote:


There's a famous story about a similar lecture he gave at Apple, where
someone else pushed back in a similar way. If Oberon doesn't have
encapsulation how can it be object-oriented. In this case, Wirth's
ultimate rejoinder boiled down to "who can really say what
object-oriented means." To which the questioner responded, "Well, I
suppose I do, I'm Alan Kay and I invented the term."

I'm not sure he has a moral claim to how the term is used now just
because he invented it. After all, in Kay's mind, C++, Java et al.,
are not Object Oriented languages. But I suspect that most of us here
would say they are.

But I agree with your general point. I read the paper, and I have
never before seen so many over-generalisations, straw-man arguments
and ad hominem fallacies outside of a party political broadcast.

Martin


Let's paraphrase this:

"Although the possibility of travelling longer distances at greater speed
was heralded as one of the great consequences
of Orville and Wilbur Wright's profound idea of combining an internal
combustion engine with their newly uncovered mastery of controlling
the flight of a glider in three axes, it quickly turned
out to enable a dangerous technique and to constitute an
unlimited source of pitfalls. Man must stay on the ground and out of such
dangerous flying machines, if the possibility of crashing was not to become
a nightmare. Aeroplanes were recognized as an extremely bad idea."

And it is; sometimes it is very brave not to go directly for the most
advanced thing immediately. Human nature assures that we go there
anyway :slight_smile: I think that the merit of reading Wirth's
article is not to agree with him, but to become more critical, more
sceptical. Let me use Rick's paraphrase: great, we can fly now, and
that is a beautiful and important accomplishment, but maybe I shall
not use it *only* because it is new and sophisticated. What are the
shortcomings? Sometimes the old method, like *walking*, is better.

I think that when Wirth says "bad ideas" he is plain wrong, but he is
a well-recognized man in his science and not the youngest any more. So
I translated certain statements from "bad idea" to "dangerous idea",
etc., and in that light I found his POVs most interesting.

As a matter of fact he does not so much criticize the concepts as the
Human Way of using them; well, that's how I read it :wink:

Cheers
Robert


Nevertheless the following still holds :wink:

···

On 10/16/06, Rick DeNatale <rick.denatale@gmail.com> wrote:

On 10/16/06, Rich Morin <rdm@cfcl.com> wrote:

--
The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man.

- George Bernard Shaw

Morton:

IMO, Bertand Meyer and his colleagues are ones who have taken the
baton from Wirth and gone forward to develop a modern, elegant, and
fully object-oriented language from Pascal (via Ada) -- Eiffel. In
some ways, the antithesis of Ruby, Eiffel, because it so well
designed and because it is backed up with excellent libraries, is
also a very satisfying language to program in.

Maybe I'm mis-parsing that. Are you saying that Eiffel is the
antithesis of Ruby because it is well-designed and has excellent
libraries?

Martin

Phrogz wrote:


The most authoritative-looking account[1] I could find says that it
was not Wirth giving the lecture (and that Alan didn't use his name in
the retort). Good story, though :slight_smile:

[1] http://c2.com/cgi/wiki?HeInventedTheTerm

* "So, this product doesn't support inheritance, right?"
   "that's right"
* "And it doesn't support polymorphism, right?"
   "that's right"
* "And it doesn't support encapsulation, right?"
   "that's correct"
* "So, it doesn't seem to me like it's object-oriented".

That just seems wrong.

Here's a description of OBJECT-ORIENTED PROGRAMMING IN OBERON-2
http://www.statlab.uni-heidelberg.de/projects/oberon/kurs/www/Oberon2.OOP.html

To me, he seemed to say Eiffel is the antithesis of Ruby, /but/, since
it's well-designed and has excellent libraries, it's also a fun
language to program in. The "antithesis" bit probably comes from the
fact that Eiffel requires you to define certain things much more
strictly than Ruby, it being the "poster child" for Design By
Contract.

···

On 10/17/06, Martin Coxall <pseudo.meta@gmail.com> wrote:

Morton:

Maybe I'm mis-parsing that. Are you saying that Eiffel is the
antithesis of Ruby because it is well-designed and has excellent
libraries?

--
Bira

http://sinfoniaferida.blogspot.com

Absolutely not. The antithetic elements are strong variable typing, multiple inheritance, and emphasis on compile-time correctness. Good design and excellent libraries are attributes it has in common with Ruby.

My point, which I apparently didn't bring off, is that I believe good language design and good libraries are more important to delivering a satisfying programming experience than more idiomatic attributes such as dynamic/static variable typing or single/multiple inheritance models.

Regards, Morton

···

On Oct 17, 2006, at 4:46 AM, Martin Coxall wrote:


Maybe I'm mis-parsing that. Are you saying that Eiffel is the
antithesis of Ruby because it is well-designed and has excellent
libraries?

First, I don't know how accurately the description on Ward's Wiki
reflects what questions were actually asked of the presenter on
Oberon.

More importantly though, note that it says that it was the original
Oberon, not Oberon 2. Since some language designers like to name
languages with version numbers, or dates, it's easy to confuse say
Oberon with Oberon 2, or Simula with Simula 67.

Now, getting to the referenced OBERON 2 paper. Thanks for the memory refresh!

It was Oberon 2 which Herr Doctor Wirth presented at the UNC seminar I
mentioned. Having now looked at this paper, I remember more of why I
had the reaction I did at the time. He even used the same examples in
the talk, IIRC.

First of all, that paper pretty much takes it upon itself to define
object oriented programming, quoting:

"Object-oriented programming is based on three concepts: data
abstraction, type extension and dynamic binding of a message to the
procedure that implements it. All these concepts are supported by
Oberon-2. We first discuss type extension since this is perhaps the
most important of the three notions, and then turn to type-bound
procedures, which allow data abstraction and dynamic binding."

Now, whether or not you think that Alan Kay 'owns' the definition of
OOP (more on that later), I don't think I've seen OOP expressed
exactly like this anywhere else. I guess it's kind of a Humpty Dumpty
technique, which is similar to some of the arguments in Wirth's paper
which prompted this thread.

Type extension in Oberon 2 is really nothing more than Hoare's "Record
Classes", which predated Oberon, let alone Oberon 2, by over 20 years.
This technique for modeling abstract data types was the key difference
between Simula and Simula 67, and was also the main element of C++,
which Stroustrup originally viewed as an implementation of Simula 67
based on C rather than Algol.

Abstract data types are still firmly rooted in the traditional
computational model which separates programs from the data on which
the programs operate. Programs were viewed as boxes with an input
funnel and an output chute. You poured in some data, the program
chewed on it, and data spewed out. Some here might be old enough to
remember the old HIPO diagrams (Hierarchy plus Input Process Output),
which were the new-age (c. 1970) improvement on flowcharts.

Kay's conception of OOP was that each object should be like a small
computer which combined data and program (methods) as an
implementation, and that the computation model should not require
details of the implementation to be known across the interface between
objects. This is a strong form of encapsulation.

In contrast, the encapsulation afforded by abstract data type
languages such as Oberon 2, C++, Eiffel, and to some extent Java, is
a kind of pseudo-encapsulation in which the compiler must examine
source code on both sides of an interface. Details of the
implementation are then 'hidden' by making access to them illegal. In
Oberon or C++, encapsulation errors result in compiler, or perhaps
linkage editor, errors, assuming that the programmer hasn't violated
encapsulation with low-level programming tricks. In a language with
Kay-encapsulation, implementation details just aren't visible from
another object.
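Ruby sits on the Kay side of that line. A sketch of what "implementation details just aren't visible" means in practice (the `Account` class is my hypothetical example):

```ruby
class Account
  def initialize(balance)
    @balance = balance   # implementation detail, not part of any interface
  end

  def deposit(amount)
    @balance += amount
  end
end

acct = Account.new(100)
acct.deposit(50)         # sending messages is the only way in

# There is no compile-time 'private:' section to police; @balance simply
# has no name outside the object. Sending an unknown message fails at
# run time, inside the receiving object, not in a compiler pass.
begin
  acct.balance
rescue NoMethodError
  # the object never exposed its state
end
```

(Reflection escape hatches like `instance_variable_get` do exist, but they are explicit reflection, not part of the normal message interface.)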

Now, comparing Oberon 2, as described in this paper, with C++, they
seem very similar: abstract data types which can be related via type
inheritance, and virtual functions which have a prototypical
implementation using vtables. Note the description of how dynamic
binding to type-bound procedures is implemented:

     "A message v.P is implemented as v^.tag^.ProcTab[Index-of-P]. "

This is exactly the same as Stroustrup's implementation of virtual
function calls in C++. It's also an example of the kind of
implementation-level information which is needed at compile and link
time, since Index-of-P depends on implementation details of the target
object. It's possible to get a little more flexibility by using more
dynamic techniques, but the implementors of these languages typically
see binding method selectors to an integer in the range 0 to the
number of methods minus 1 as a nearly irresistible premature
optimization. It's also one of the reasons that systems written in
these kinds of languages have long build times, since there is a high
degree of dependency between source files.
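The `v^.tag^.ProcTab[Index-of-P]` scheme can be mimicked in a few lines of Ruby, which also makes the fragility visible: every call site has to agree, forever, on the integer index of each method. (This is my own toy model of the mechanism, not code from the paper.)

```ruby
# A 'type descriptor' is just an array of procs (the ProcTab); an
# object carries a tag pointing at its type's table. Method names are
# frozen into integer indices at 'compile time'.
AREA = 0   # Index-of-P, fixed for every caller

RectTab   = [->(o) { o[:w] * o[:h] }]        # ProcTab for rectangles
CircleTab = [->(o) { 3.14159 * o[:r]**2 }]   # ProcTab for circles

rect   = { tag: RectTab,   w: 3, h: 4 }
circle = { tag: CircleTab, r: 1 }

# v.P  ==>  v[:tag][Index-of-P].call(v)
def dispatch(v, index)
  v[:tag][index].call(v)   # one table lookup with a constant offset
end

dispatch(rect, AREA)   # => 12
# New methods can only be appended to a table; reordering it would
# silently break every already-'compiled' call site.
```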

Now, the new feature of Oberon 2 which prompted MY questioning of
Wirth during the seminar is its Message Records, which make Oberon
slouch a bit towards Smalltalk's computation model, but stumble at the
first step.

Note that the rectangle's message handler is nothing more than a
hand-coded typecase statement which discriminates on the message type.
Oberon 2 didn't really introduce late-bound messages to the language;
it simply extended WITH (the typecase statement) to allow for type
extension.

The actual message handling is just a stylized use of this feature.
Unlike Ruby or Smalltalk, message handling isn't a uniform mechanism
at the core of the language, it's just a design pattern.
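In Ruby terms, the Oberon-2 message handler amounts to hand-writing a dispatcher like the first half of this sketch for every class, where the language's own dispatch already does the job uniformly. (The `DrawMsg`/`MoveMsg` names are my hypothetical examples.)

```ruby
# Oberon-2 style: a hand-coded handler discriminating on message type.
DrawMsg = Struct.new(:color)
MoveMsg = Struct.new(:dx, :dy)

def rectangle_handler(msg)
  case msg                      # the 'typecase': one branch per message
  when DrawMsg then "drawing in #{msg.color}"
  when MoveMsg then "moving by #{msg.dx},#{msg.dy}"
  else raise "unhandled message #{msg.class}"
  end
end

rectangle_handler(MoveMsg.new(2, 3))   # tested branch by branch, in order

# Ruby/Smalltalk style: dispatch is a uniform mechanism in the language
# core, not a pattern each class re-implements.
class Rectangle
  def draw(color)
    "drawing in #{color}"
  end

  def move(dx, dy)
    "moving by #{dx},#{dy}"
  end
end

Rectangle.new.send(:move, 2, 3)
```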

Languages like Ruby and Smalltalk typically do a much better job of
dispatching such dynamic messages than this kind of implementation
could ever do. Look at the analysis of the performance of messages:

    "- Messages are interpreted by the handler at run time and in
sequential order. This is much slower than the dynamic binding
mechanism of type-bound procedures, which requires only a table lookup
with a constant offset. Message records are much like messages in
Smalltalk [7], which are also interpreted at run time."

   The unstated assumption here is that since messages in Oberon 2 are
slow, Smalltalk/Ruby dynamic message sending must be just as slow. But
the ad-hoc nature of Oberon 2's messages precludes the kind of global
optimizations, using caches and other techniques, typically employed
by a Smalltalk implementation. Ruby also does a certain amount of
caching to avoid running the class chain on every call. The Oberon
approach can't avoid running those custom-written handler procedures.

Having a powerful, high-level runtime which can optimize system-wide
facilities like message dispatch and garbage collection is one crucial
aspect which distinguishes the Smalltalk/Ruby and C++/Oberon clans.

And that brings things back to Kay's view of OOP. The key concept in
KayOOP is strong encapsulation, which is nearly diametrically opposed
to strong type checking. Message-oriented computation and GC are the
keys to providing that encapsulation. GC might not be so obvious, but
it would be hard to encapsulate interfaces between objects if the
objects needed to look out for each other's survival. In Kay's OOP,
classes and inheritance are secondary, and are there to provide for
factoring the sharing of implementation between objects with similar
implementations, not for modeling types so that procedures can run
over the data.
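Ruby illustrates that separation nicely: two unrelated classes can answer the same message with no common ancestor or declared type, and inheritance enters only when there is implementation worth sharing. A sketch (the classes here are my hypothetical examples):

```ruby
# No shared superclass, no declared interface: responding to the
# message is all that matters ("duck typing").
class Duck
  def speak
    "quack"
  end
end

class Robot
  def speak
    "beep"
  end
end

[Duck.new, Robot.new].map { |o| o.speak }   # => ["quack", "beep"]

# Inheritance enters only to factor out shared implementation...
class LoudDuck < Duck
  def speak
    super.upcase    # reuse Duck's body, not its 'type'
  end
end

LoudDuck.new.speak   # => "QUACK"
```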

Whether or not Alan Kay has the right to claim the definition of OOP,
the fact that it got hijacked by Peter Wegner, in a paper which
misunderstood the differing meanings of classes and inheritance in the
two major groups of languages which use those terms quite differently,
is the foundation of the tower of Babel which has confounded
conversations about Object Orientation for so many years.

···

On 10/17/06, Isaac Gouy <igouy@yahoo.com> wrote:


--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

Rick DeNatale wrote:


First, I don't know how accurately the description on Ward's Wiki
reflects what questions were actually asked of the presenter on
Oberon.

More importantly though, note that it says that it was the original
Oberon, not Oberon 2. Since some language designers like to name
languages with version numbers, or dates, it's easy to confuse, say,
Oberon with Oberon 2, or Simula with Simula 67.

That's timely - I was just about to make the same correction :wink:

"One important goal for Oberon-2 was to make object-oriented
programming easier without sacrificing the conceptual simplicity of
Oberon. After three years of using Oberon and its experimental
offspring Object Oberon we merged our experiences into a single refined
version of Oberon.
The new features of Oberon-2 are type-bound procedures, read-only
export of variables and record fields, open arrays as pointer base
types, and a with statement with variants. The for statement is
reintroduced after having been eliminated in the step from Modula-2 to
Oberon."

ftp://ftp.inf.ethz.ch/pub/software/Oberon/OberonV4/Docu/Oberon2.Differences.ps.gz

What a unique approach - exclude as many language features as you can,
see what it's like, and then include the features that you really missed.

Now, getting to the referenced OBERON 2 paper. Thanks for the memory refresh!

It was Oberon 2 which Herr Doctor Wirth presented at the UNC seminar I
mentioned. Having now looked at this paper, I remember more of why I
had the reaction I did at the time. He even used the same examples in
the talk, IIRC.

First of all, that paper pretty much takes it upon itself to define
object oriented programming, quoting:

"Object-oriented programming is based on three concepts: data
abstraction, type extension and dynamic binding of a message to the
procedure that implements it. All these concepts are supported by
Oberon-2. We first discuss type extension since this is perhaps the
most important of the three notions, and then turn to type-bound
procedures, which allow data abstraction and dynamic binding."

Now, whether or not you think that Alan Kay 'owns' the definition of
OOP (more on that later), I don't think I've seen OOP expressed
exactly like this anywhere else. I guess it's kind of a Humpty Dumpty
technique, straight out of Through the Looking-Glass, which is similar
to some of the arguments in Wirth's paper which prompted this thread.

Why would you expect to see OOP expressed in terms of Oberon-2 features
anywhere else :slight_smile:

Type extension in Oberon 2 is really nothing more than Hoare's "Record
Classes", which predated Oberon, let alone Oberon 2, by over 20 years.
This technique for modeling abstract data types was the key
difference between Simula and Simula 67, and was also the main element
of C++, which Stroustrup originally viewed as an implementation of
Simula 67 based on C rather than Algol.

Abstract data types are still firmly rooted in the traditional
computational model which separates programs from the data on which
the programs operate. Programs were viewed as boxes with an input
funnel and an output chute. You poured in some data, the program
chewed on it, and data spewed out. Some here might be old enough to
remember the old HIPO diagrams (Hierarchy plus Input Process Output),
which were the new-age (c. 1970) improvement on flowcharts.

Kay's conception of OOP was that each object should be like a small
computer which combined data and program (methods) as an
implementation, and that the computation model should not require
details of the implementation to be known across the interface between
objects. This is a strong form of encapsulation.

http://users.ipa.net/~dwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

In contrast, the encapsulation afforded by abstract data type
languages such as Oberon 2, C++, Eiffel, and to some extent Java, is
a kind of pseudo-encapsulation in which the compiler must examine
source code on both sides of an interface. Details of the
implementation are then 'hidden' by making access to them illegal. In
Oberon or C++, encapsulation errors result in compiler, or perhaps
linkage editor, errors, assuming that the programmer hasn't violated
encapsulation with low-level programming tricks. In a language with
Kay-encapsulation, implementation details just aren't visible from
another object.
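Ruby shows Kay-encapsulation directly. In this sketch (the Circle and Rect classes are invented for illustration, not from the paper), the sender needs no declared common type or compile-time interface; it only relies on each object answering the :area message, and each object's instance variables are invisible from outside:

```ruby
# Kay-style encapsulation: the sender knows nothing about a receiver's
# implementation -- only that it responds to the :area message.

class Circle
  def initialize(r)
    @r = r   # instance state, invisible outside the object
  end

  def area
    (3.14159 * @r * @r).round(2)
  end
end

class Rect
  def initialize(w, h)
    @w, @h = w, h
  end

  def area
    @w * @h
  end
end

# No shared superclass, no interface declaration -- just message sends:
shapes = [Circle.new(1), Rect.new(3, 4)]
shapes.map(&:area)  # => [3.14, 12]
```

There is nothing for a compiler to check across the interface here; an object that doesn't understand :area fails only at the moment the message is sent.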

"pseudo encapsulation" is misleading at best - when an Oberon-2 program
is being compiled, compilation of the import lists causes the /symbol
files/ of imported modules to be read for exported identifiers, the
public interfaces of the imported modules. There's no examination of
source files. The implementation details (whatever that means) of
imported modules were never made public when those modules were
compiled.

What's an "encapsulation error"?
(imo when we talk about compilation or linkage errors we should
acknowledge the other alternative is run time errors.)

In a language with module encapsulation, implementation details just
aren't visible from another module.

Now, comparing Oberon 2, as described in this paper, with C++, they
seem very similar: Abstract Data Types, which can be related via type
inheritance, and virtual functions, which have a prototypical
implementation using vtables. Note the description of how dynamic
binding to type-bound procedures is implemented:

     "A message v.P is implemented as v^.tag^.ProcTab[Index-of-P]. "

This is exactly the same as Stroustrup's implementation of virtual
function calls in C++. It's also an example of the kind of
implementation-level information which is needed at compile and link
time, since Index-of-P depends on implementation details of the target
object. It's possible to get a little more flexibility by using more
dynamic techniques, but the implementors of these languages typically
see binding method selectors to an integer in the range 0 to the
number of methods - 1 as a nearly irresistible premature optimization.
It's also one of the reasons that systems written in these kinds of
languages have long build times, since there is a high degree of
dependency between source files.
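To make the constant-offset lookup concrete, here's a minimal Ruby sketch of the v^.tag^.ProcTab[Index-of-P] scheme (the names DRAW, AREA, and RectangleTag are invented for illustration; a real compiler bakes the indices in at compile time, which is exactly the source-level dependency described above):

```ruby
# vtable-style dispatch, as in C++ virtual calls and Oberon-2 type-bound
# procedures: each type tag carries a procedure table indexed by a
# selector index fixed when the type was compiled.

DRAW = 0   # Index-of-Draw
AREA = 1   # Index-of-Area

RectangleTag = {
  proc_tab: [
    ->(obj) { "drawing #{obj[:w]}x#{obj[:h]} rectangle" },  # Draw
    ->(obj) { obj[:w] * obj[:h] }                           # Area
  ]
}

v = { tag: RectangleTag, w: 3, h: 4 }

# The message v.P becomes v[:tag][:proc_tab][Index-of-P] -- a constant-offset
# table lookup, with no search and no run-time interpretation:
v[:tag][:proc_tab][AREA].call(v)  # => 12
```

Note that if a new method were inserted into the table, every caller compiled against the old indices would need recompiling - hence the long build times.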

Now, the new feature of Oberon 2 which prompted MY questioning of
Wirth during the seminar is its Message Records, which make Oberon
slouch a bit towards Smalltalk's computation model, but stumble at the
first step.

Note that the rectangle's message handler is nothing more than a
hand-coded typecase statement which discriminates on the message type.
Oberon 2 didn't really introduce late-bound messages to the language;
it simply extended the WITH statement (Oberon's typecase) to allow for
type extension.

The actual message handling is just a stylized use of this feature.
Unlike Ruby or Smalltalk, message handling isn't a uniform mechanism
at the core of the language, it's just a design pattern.

Languages like Ruby and Smalltalk typically do a much better job of
dispatching such dynamic messages than this kind of implementation
could ever do. Look at the analysis of the performance of messages:

    "- Messages are interpreted by the handler at run time and in
sequential order. This is much slower than the dynamic binding
mechanism of type-bound procedures, which requires only a table lookup
with a constant offset. Message records are much like messages in
Smalltalk [7], which are also interpreted at run time."

The unstated assumption here is that because messages in Oberon 2 are
slow, Smalltalk/Ruby dynamic message sending must be just as slow. The
ad-hoc nature of Oberon messages precludes the kind of global
optimizations, using caches and other techniques, typically employed
by a Smalltalk implementation. Ruby also does a certain amount of
caching to avoid walking the class chain on every call. The Oberon
approach can't avoid running those custom-written handler procedures.
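As a very rough sketch of the caching idea (not how any particular VM actually does it - real implementations use per-call-site inline caches keyed by receiver class, with invalidation on redefinition), a global method cache keyed by class and selector might look like this; SimpleCache and send_msg are invented names:

```ruby
# Memoize the result of method lookup so repeated sends to receivers of
# the same class don't walk the ancestor chain every time.

class SimpleCache
  def initialize
    @cache = {}   # [class, selector] => UnboundMethod
  end

  def send_msg(receiver, selector, *args)
    key = [receiver.class, selector]
    meth = (@cache[key] ||= receiver.class.instance_method(selector))
    meth.bind(receiver).call(*args)
  end
end

cache = SimpleCache.new
cache.send_msg("hello", :upcase)  # => "HELLO" (lookup performed, cached)
cache.send_msg("world", :upcase)  # cache hit: the chain walk is skipped
```

This kind of system-wide optimization is available precisely because message sending is one uniform mechanism; a hand-coded handler procedure gives the runtime nothing generic to cache.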

I don't know that anyone tried to optimize Oberon-2 message handlers,
although the author's opinion is clear enough:

"In general, type-bound procedures are clearer and type-safe, while
message records are more flexible. One should use type-bound procedures
whenever possible. Message records should only be used where special
flexibility is needed, e.g., for broadcasting a message or for cases
where it is important to add new messages to a type later without
changing the module that declares the type."

Having a powerful, high-level runtime which can optimize system-wide
facilities like message dispatch and garbage collection is one crucial
aspect which distinguishes the Smalltalk/Ruby and C++/Oberon clans.

"a powerful, high-level runtime" - Oh you mean JVM :slight_smile:

I guess you don't know that one of the big differences between Modula-2
and Oberon was that Oberon had GC: "We assume that retrieval of storage
is performed automatically by a so-called storage reclamation
mechanism, also called garbage collector." Programming in Oberon 1982
p41

http://www.oberon.ethz.ch/WirthPubl/ProgInOberon.pdf

···

On 10/17/06, Isaac Gouy <igouy@yahoo.com> wrote:

And that brings things back to Kay's view of OOP. The key concept in
KayOOP is strong encapsulation which is nearly diametrically opposed
to strong type checking. Message oriented computation, and GC are the
keys to providing that encapsulation. GC might not be so obvious, but
it would be hard to encapsulate interfaces between objects if the
objects needed to look out for each other's survival. In Kay's OOP,
classes and inheritance are secondary, and are there to factor out
implementation shared among similar objects, not for modeling types so
that procedures can run over the data.

Whether or not Alan Kay has the right to claim the definition of OOP,
the fact that it got hijacked by Peter Wegner, in a paper which
misunderstood the differences in the meanings of classes and
inheritance between the two major groups of languages which use those
terms quite differently, is the foundation of the Tower of Babel which
has confounded conversations about Object Orientation for so many
years.

--
Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

Isaac Gouy wrote:

Programming in Oberon 1982

No, Programming in Oberon (2004) - A derivative of Programming in
Modula-2 (1982)

"The Programming Language Oberon" was published in 1988
