Mark Probert wrote:
PCCTS was the predecessor of ANTLR.
Actually, it was the toolset that contained the original ANTLR.
When they decided they needed a Java version, they didn't go
multi-lingual - strange decision for a language research group!
Another suite of language tools we used successfully a few years
back was Eli. I'm afraid I can't remember much about it now, except
that there seemed to be too many tools, insufficiently well integrated.
I also had fun with TXL, but that's a language-transformation
language not a compiler generator. It's only applicable when the
input and output grammars are identical or can be structurally
fused - very difficult.
... *Ruby syntax extensions*
Forth?
it's a case of back to the future ...
No, I mean syntax extensions - not choosing a language with an
almost complete absence of syntax :-). So you could define useful
and appropriate domain-specific languages...
Hugh Sasse wrote:
> it's difficult to hold this state in one's, (or is that just my?),
> head
No, I agree. The reason is that the grammar rules are transformed
first into simple rules each containing only *two* items. If you
look into how that transformation is done, you'll know what's
happening for a given grammar. The state is much more obvious then,
because it's defined in terms of these atomic rules.
For example, a rule A ::= B C D gets turned into two rules, either
B_C ::= B C
A ::= B_C D
or
C_D ::= C D
A ::= B C_D
depending on precedence. After processing all the rules, there will
normally be a number of duplicates, ambiguities, and other problems,
so there are about five more stages of checking and massaging until
a parser can be generated. Unfortunately the individuals who are
capable of implementing these processes are so adept at linguistics
that they can't explain the processes in terms that we mortals can
understand :-)... though I'm sure I almost understood it once...
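The binarization step above can be sketched in a few lines of Python. This is just an illustrative sketch of pairing symbols from the left - the function name, the tuple-based rule format, and the underscore naming for combined symbols are my own assumptions, not how any particular parser generator actually does it (and a real one would choose the pairing by precedence, as noted above):

```python
def binarize(lhs, rhs):
    """Split a rule lhs ::= rhs into rules whose right-hand sides
    contain at most two symbols, pairing from the left.
    (Illustrative only; a real generator picks the pairing by precedence.)"""
    rules = []
    while len(rhs) > 2:
        first, second, *rest = rhs
        combined = f"{first}_{second}"            # e.g. B and C become B_C
        rules.append((combined, [first, second])) # B_C ::= B C
        rhs = [combined] + rest                   # continue with B_C D ...
    rules.append((lhs, rhs))                      # A ::= B_C D
    return rules

# A ::= B C D becomes the two rules  B_C ::= B C  and  A ::= B_C D
print(binarize("A", ["B", "C", "D"]))
```

Running this on the example rule prints `[('B_C', ['B', 'C']), ('A', ['B_C', 'D'])]`, matching the first of the two decompositions shown above; pairing from the right instead would give the C_D variant.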
Clifford.