People know how strongly I feel about this, so I risk the
broken-record syndrome whenever I talk about threads, but to your
point: I've always been fascinated by how much programmers seem to
love threads. Programming with threads reminds me of playing Go: it
takes an afternoon to learn the rules, but it can take a whole lifetime
to really achieve mastery.
I used to say that there are only two cases in which threads can be a
good thing: 1) If the structure of the problem is inherently threaded,
and 2) if you have an opportunity to capture system latencies like
disk and network I/O.
After reading the article Ara linked, I think my point 1 is another
way of saying: don't introduce nondeterminacy where it doesn't exist
in the problem domain. David, you refer to this (with Lee) as
deterministic concurrency. Threads really can't help solve these
problems. But things like network servers that really are
nondeterministic are fairly easy to program in threads.
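To make that concrete, here's a rough sketch in Python (names like
handle and serve are mine, not from any library): a thread-per-connection
echo server, where the nondeterminacy of client arrivals maps naturally
onto threads.

```python
import socket
import threading

def handle(conn):
    # echo bytes back until the client closes its end
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve(server_sock):
    # accept forever; each client gets its own thread
    while True:
        conn, _addr = server_sock.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    assert client.recv(1024) == b"hello"
```

The point is that the threads mirror nondeterminacy already present in
the problem (clients arrive whenever they arrive), rather than
introducing any of their own.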
My point 2 is something I'm moving away from, now that event-driven
models for programming I/O-heavy applications are coming to the fore.
I'm astonished at how people immediately jump to threads as an easy
abstraction for modeling problems that involve latency but not
nondeterminacy - it's like a reflex response, and it's not a good one.
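For contrast, here's what the event-driven style looks like for the
same echo problem, sketched with Python's stdlib selectors module: a
single thread with readiness callbacks instead of blocked threads
(pump is a made-up name for one turn of the loop).

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(sock):
    # a client arrived: register its connection for read events
    conn, _addr = sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    # data (or EOF) arrived on a connection: echo or clean up
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
port = server.getsockname()[1]

def pump(timeout):
    # one turn of the event loop: dispatch every ready callback
    for key, _events in sel.select(timeout):
        key.data(key.fileobj)

client = socket.create_connection(("127.0.0.1", port))
pump(1.0)                 # server socket is readable -> accept
client.sendall(b"hello")
pump(1.0)                 # connection is readable -> echo
assert client.recv(1024) == b"hello"
client.close()
pump(1.0)                 # EOF on the connection -> unregister
```

No locks, no shared mutable state between handlers - the loop
serialises everything, which is exactly the determinism the article is
after.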
I took a look at Alice (an ML derivative) and I loved its concept of
promises and futures, but these work best in a pure-functional context
where you're not worrying about side-effects.
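Alice's futures don't translate directly, but Python's
concurrent.futures gives a loose analogue of the idea: submit returns a
placeholder immediately, and you synchronise only at the moment you
demand the value.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_double(x):
    time.sleep(0.05)          # stand-in for a latency-bound computation
    return 2 * x

pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(slow_double, 21)   # a future: the value isn't ready yet
# ...other work can proceed here while slow_double runs...
value = future.result()                 # block only when the value is demanded
assert value == 42
pool.shutdown()
```

In a pure-functional setting the placeholder can be passed around
freely, because nothing observes *when* it gets filled in - which is
why side-effects spoil the model.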
Erlang is a beautiful thing: do have a look at it if you get a chance.
On 9/1/06, David Vallner <firstname.lastname@example.org> wrote:
> Mildly interesting. I think the popularity of using threads comes from
> the fact they're a much easier concept to wrap your head around. They
> also let you introduce parallelism as an afterthought - dispatching
> tasks into a worker thread pool at places where you can block on IO
> isn't really complicated, etc. Deterministic concurrency models
> require a more premeditated approach, and more effort to achieve the
> same levels of parallelism in a program. You practically have to
> consider "thread synchronisation" (e.g. the message passing itself in
> that model) as a fundamental design element instead of an edge case
> to keep from breaking.
> Deterministic concurrency might be The Right Thing in being less
> error-prone, but sometimes the Hack That Works is more practical. (Cf.
> ML-like type system "versus" latent typing.)
> This reminds me, I should spend some time in Erlang and Nemerle land.
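The worker-pool dispatch described above really is that easy - here's a
sketch with a hypothetical fake_io standing in for a blocking read,
showing the four waits overlapping instead of serialising:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(n):
    time.sleep(0.2)     # hypothetical blocking disk or network read
    return n * 2

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    # map dispatches the calls to worker threads and keeps result order
    results = list(pool.map(fake_io, range(4)))
elapsed = time.monotonic() - start

assert results == [0, 2, 4, 6]
assert elapsed < 0.6    # four 0.2 s waits overlapped, not 0.8 s serialised
```

Which is my point 2 exactly - and why it's so seductive as an
afterthought, even when an event loop would model the problem more
honestly.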