I'm not sure it will be superseded any time soon, objects being a very
natural way for people to think about real-world problems.
It may be natural for some people, but I have to force myself to think
in an OO way. I think procedurally, but that may well be a result of
too much Fortran, Basic and assembler in my formative years.
It seems very natural to me to say "Here's some data; perform this
process on it". I like to think of data as analogous to nouns, and
procedures as analogous to verbs. So "process(data)" seems like English
to me, very natural.
Most of my programs have this form:
- Input some data
- Process it
- Output the result
but that might just be the sort of programming I do (engineering and web
mostly). In an OO language like Java or Ruby I tend to do this sort of
thing:
All system-level engines are written in C, not C++.
...
It compiles for everything from wristwatches to Mars Rovers.
This is because object-orientation is unnecessary. Any job you can do
with it, you can do without it.
Those of us who have worked on large programs in Fortran, C and
assembler know that it's easy to write excellent software in an
imperative style, as long as you have the discipline to structure
your data and program code sensibly.
Very true, although a large program in assembler is often a much smaller program when rephrased in C and likewise when the C is rephrased in C++. That's the primary win with OO - it reduces the volume of code and hence eases the strain of remembering that code in detail.
Close but not really. The primary purpose of OO is that it provides a
higher level of abstraction than C does. One that can get closer to the problem being solved than C does in natural use.
This >can< result in less code, but the main win is that it results in a better mapping of the program to the problem, and a better understanding of the program by the programmers.
Naturally, all of this assumes you know what you are doing.
···
On 11 Jul 2008, at 12:12, Dave Bass wrote:
OO is the latest fashion, but something else will come along soon, and
we'll all be deprecating OO.
And what's worse, for every older C++ project you need an appropriate
vintage of GCC (or whatever compiler the authors used). There are both
binary-level and source-level differences between different major (and
even minor) versions of GCC. This is because the standard was (is?)
evolving relatively rapidly, and the compilers could not immediately
pick up the changes, fix all deficiencies, or conform in every newly
standardised area.
Which is why Ruby's source is compatible with K&R C - before even prototypes.
There actually was a time when even the use of C was in doubt for embedded systems. An awful lot of Forth, Pascal, PL/M and assembler code was written before C compilers were good enough. Now, of course, the C compilers are good enough to build a high-performance Forth environment in C.
Heretic ;p
I'm not the heretic -- the heretics are the people who added "labels as values" to gcc, and the gForth team that exploited it.
You describe an "Object Based" system.
The main win of OOP is that virtual methods have language support, reducing the odds of a misfire when you override a method. The alternative is messy function pointers that might point to garbage, or to NULL.
Overriding methods, in turn, allows old code to call new code, so you can easily change a program by adding to its list of concrete classes while leaving the abstract classes alone. The "Open Closed Principle".
This benefit makes code harder to read and understand, not easier.