Disclaimer: All my assertions about Ruby, Rails, WEBrick and MySQL are
based on a few months of experience with these products.
David Vallner wrote:
> For the client-side logic, you're still stuck with HTML and Javascript,
> which are both flawed in their way. While web frameworks help with that,
> it's still artificially shifting logic that could as well be done
> client-side (validation which only requires checking against the data
> the client is already operating on, paginating content, formatting
> data, etc.)
The app structure I outlined, Firefox+Ruby+Rails+WEBrick+MySQL, all
runs on the client's machine (except for the database server). The
user machine(s) need not be connected to the Internet, though they do
need a LAN connection to a database server. So everything except the
data-store is "client-side". Do you agree?
Secondly, the kind of apps I've worked on (and plan to continue working
on) all need robust user interfaces. Raw HTML is no doubt inferior to,
say, Microsoft's MFC or Java's Swing. However, Rails views are run
through Embedded Ruby (ERB), which programmatically generates repetitive
HTML, so it's pretty nice to use even absent a drag-n-drop GUI
builder.
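To illustrate, here is a minimal sketch using the ERB class from Ruby's standard library, outside of Rails; the template and the `items` variable are my own invention:

```ruby
require 'erb'

# A template that generates repetitive HTML from Ruby data:
# one <li> per element, without hand-writing each tag.
template = ERB.new(<<~HTML)
  <ul>
  <% items.each do |item| %>
    <li><%= item %></li>
  <% end %>
  </ul>
HTML

items = ["apples", "bananas", "cherries"]
html = template.result(binding)
puts html
```

Rails wraps the same mechanism in its view layer, so a loop over model objects replaces pages of repeated markup.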
> I'm not aware of any way to easily keep a Ruby-based client
> program up-to-date conveniently
Ruby supports Test Driven Development in a very rich way (which I never
did before). Likewise, it supports automated programming
documentation in a pretty convenient way (never did that either). Add
the fact that it supports database development with an abstraction that
offers independence from variations among SQL implementations. I'm
optimistic that maintenance will be easier on these projects than on
any I've been on before.
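For instance, a minimal test in the Test::Unit framework that ships with Ruby; the `Invoice` class and its method are purely my own illustration:

```ruby
require 'test/unit'

# A tiny class under test.
class Invoice
  def initialize(amounts)
    @amounts = amounts
  end

  def total
    @amounts.inject(0) { |sum, a| sum + a }
  end
end

# Test-first style: state the expectation, then make it pass.
class InvoiceTest < Test::Unit::TestCase
  def test_total_sums_line_items
    assert_equal 60, Invoice.new([10, 20, 30]).total
  end

  def test_total_of_empty_invoice_is_zero
    assert_equal 0, Invoice.new([]).total
  end
end
```

Running the file executes the tests automatically and reports any failures, which is what makes the red-green TDD cycle so cheap in Ruby.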
> or even deliver it cross-platform
> easily because of the dependency on native extensions.
Rails is based on Ruby. Ruby is based on C. And C is widely viewed as
a hardware abstraction layer. So that stuff is already ported anywhere
I'm likely to want to go.
> I'm not very sure how Rails would be suitable to make desktop
> applications, the framework doesn't seem to have a discernable core for
> action processing
It implements an MVC architecture in a very neat way. The models
house and enforce the business logic. The views present data from the
models to the client, and pass back client actions as appropriate. The
controllers oversee the traffic between those layers. I haven't done
much of this yet, so it remains for me to see how well that works.
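As a sketch of the shape this takes in Rails (a fragment, not a runnable program; the class, table, and validation names are hypothetical):

```ruby
# Model: houses and enforces the business logic.
class Order < ActiveRecord::Base
  validates_presence_of :customer_name
end

# Controller: oversees traffic between model and view.
class OrdersController < ActionController::Base
  def show
    # Fetch the model instance; the matching view template renders it.
    @order = Order.find(params[:id])
  end
end
```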
> that would be unaware of web idiosyncrasies. I might
> be wrong.
That's a good point to consider. What sort of idiosyncrasies might
adversely affect this scheme, noting that we're planning for apps on
isolated machines (except for the database connection)?
> The namedropped (in a snarky way) experience is ad verecundiam, a
> logical fallacy.
I didn't intend to offend anyone. I forgot why I mentioned it.
(Unfortunately, I deleted my posted msg). I think I was making it
clear that I've had a lot of experience in a variety of software
development environments. I intended, but failed, to make it clear at
the outset that I was new to Ruby, Rails and web development. I can't
see how that constitutes "the fallacy of making an illicit appeal to
authority". But, as you say, it's not worth pursuing.
> Ruby is fine if you can juggle the complexity, but it requires more
> programmer competence and diligence for the software to be robust - a
> quality that grows with importance the closer to end users you get.
Isn't that true regardless of the language?
> I'm past my "Oooh, shiny!" Rails phase. Scaffolding is nice, yet in a
> way just smoke and mirrors.
Any large application requires decisions about how the code should be
factored. Rails gives you one-liners that put stuff in the "right"
place so it will be found by other pieces and by the developer.
Wouldn't you credit that as more than smoke and mirrors? Particularly
since they've built in this scheme of "helpers" that help
developers factor their code in what IMHO is a natural way.
> ActiveRecord is just way too basic as an ORM
> solution and not much simpler to use once enough of its assumptions on
> the data model and physical DB schema don't hold.
If you use Active Record migrations, the SQL database schema would
always be in sync, wouldn't it? And don't you value that Rails
automatically maintains a history of migrations so you can easily back
up to an earlier version if necessary? Likewise for matching back up
with the program code, so long as you use version control, e.g.
Subversion, which Rubyists like.
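A migration is an ordinary Ruby class; this fragment (the table and column names are my own invention) shows the up/down pair that makes stepping back to an earlier schema version possible:

```ruby
class CreateCustomers < ActiveRecord::Migration
  # Applied when migrating the schema forward.
  def self.up
    create_table :customers do |t|
      t.column :name,  :string
      t.column :email, :string
    end
  end

  # Applied when rolling back to the previous schema version.
  def self.down
    drop_table :customers
  end
end
```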
> Convention over
> configuration gets hairy in complex deployment scenarios - table names
> changing (having to accommodate to a weird naming convention between
> prototype and production) mean a recode, and generally the magic becomes
> harmful if you need to separate the object model from the tables and
> columns. I don't think this applies to your scenario though.
As I understand it, only the database name changes between
development, test and production, not any of the table names nor any
code, except for some external symbol indicating which one is targeted.
An important service Rails provides, and that I love, is generating
one-way and two-way linkages for one-to-many and many-to-many
relationships with one-liners and simple column-name standards for
keys. I've only used one database system that might be an ORM:
Documentum. But all my past and anticipated database needs have been,
and I expect will be, satisfied by E. F. Codd's style of database:
Oracle, SQL Server, DB2 and little old MySQL.
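Those one-liners look like this (a hypothetical schema; Rails infers the foreign-key column and join-table names from its naming conventions):

```ruby
# One-to-many: Rails expects an author_id column on the books table.
class Author < ActiveRecord::Base
  has_many :books
end

class Book < ActiveRecord::Base
  belongs_to :author
end

# Many-to-many: Rails expects a join table named authors_conferences
# (the two table names in alphabetical order).
class Conference < ActiveRecord::Base
  has_and_belongs_to_many :authors
end
```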
> WEBrick is a development server. Probably Good Enough for a one server -
> one user scenario, but I don't think it's meant to handle more.
Great: that fits my target scenario.
> As for MySQL, I more or less share the attitude of Austin Ziegler on it.
> Most of MySQL users are mentally stuck with v3.x and make data schemas
> that would make an onion cry.
I've been using ver. 5, and the schemas Rails generates from migrations
seem OK to me. I particularly like the fact that this version supports
transactions, IMHO a "must" for an app relying on database support.
> Mentioning "no security risks" if you let
> arbitrary clients access your database is debatable though, I thought
> you're supposed to keep production data storage behind a firewall.
> Giving end-users write-access credentials sounds like way too much
> potential for a malicious user to do damage to me, unless you only allow
> write access through secured stored procedures, or bend your model
> backwards for table-level restrictions to be sufficient. I'll have to
> disclaim though that I'm far from a security expert, so the above are to
> an extent half-educated guesses.
That's something I have to work on. My first thought is that the
MySQL server has to be on a machine whose physical security is
maintained. Then it needs network authentication for the users and
user machines hitting it. MySQL seems to support user security pretty
well with access levels and encrypted passwords. That encryption
scheme could be used on critically sensitive data if performance allows
it. If the LAN connecting the database to users has any Internet
connection, then the database server machine will need a firewall, too,
as you said.
It's been nice using this thread to think through some of these
issues and having the opportunity to try to "sell" my ideas and
consider the weaknesses they may suffer. So this exercise has been
good for me. Thanks for your enthusiastic participation.
Best wishes,
Richard