> I’ve been toying with this idea, and I’m
> having some difficulty implementing it
> as well as I’d like.
> Seeking the input of others.
> I’d like to be able to separate my app’s
> “guts” from its UI to a large extent, so
> that I could allow different UIs (such
> as Fox and “plain old text”).
(advance warning: I ramble on at great length - sorry)
Well, I’ve recently been trying to solve this problem for a bunch of
applications that I periodically try to write (they don’t get very far,
as I tend to get distracted by architectural issues like this).
The solution that I’ve ended up with is quite heavy, but it was the best
way that I could think of to completely decouple the application logic
in such a way as not to enforce any undue restrictions on whatever UI
is being used. I’m sure someone who knows much more about it than I
will point out similarities between this and the MVC pattern.
A lot of the stuff is probably unnecessary, daft, or just plain wrong,
but this is the result of thinking in isolation. I’ve been trying to
wrap it all up into a nice library, but it’s not really there yet - one
day it might appear on the RAA as “ruby-applib”.
OK, enough procrastination: the details. For starters, I divide the
program up into the UI and the Backend, both of which are managed by an
Application class.
All communication between these is done using Events, which can be sent
to classes implementing an EventTarget interface (basically an event
queue). I’ve mainly been using a threaded EventTarget, which has a
thread running the event loop for simplicity (I looked at using some
sort of state machine, but I realized that I’d basically be
re-implementing a userspace thread scheduler to make it behave the way
I wanted).
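To make that concrete, here’s a minimal sketch of what I mean by a
threaded EventTarget (the names are illustrative, not the actual
library code):

  require 'thread'

  # Anything that can receive events is just an event queue plus a
  # thread popping events off and dispatching them.
  class ThreadedEventTarget
    attr_reader :thread

    def initialize
      @queue  = Queue.new
      @thread = Thread.new { event_loop }
    end

    # The only public entry point: other modules just post events here.
    def send_event(event)
      @queue.push(event)
    end

    private

    def event_loop
      loop do
        handle_event(@queue.pop)   # pop blocks until something arrives
      end
    end

    # Subclasses (the UI, the Backend) override this with their own dispatch.
    def handle_event(event)
      raise NotImplementedError
    end
  end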
The Application class is responsible for finding and loading plugins
(more later) and configuration, and setting up the UI and Backend. It
also keeps an eye on them and, for example, knows what to do should one
of them fail unexpectedly.
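A rough sketch of the supervision side, assuming the Backend and the UI
are ThreadedEventTargets like the one above (plugin loading and
configuration are covered further down, and the names here are made
up):

  class Application
    def initialize(backend, ui)
      @backend, @ui = backend, ui
    end

    # Keep an eye on both modules: an exception that kills one of their
    # event-loop threads is re-raised here when we join it, so the
    # Application gets a chance to report the failure and shut down.
    def run
      [@backend, @ui].each { |target| target.thread.join }
    rescue => e
      $stderr.puts "a module died unexpectedly: #{e.message}"
    end
  end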
I decided against sharing large chunks of UI code between the
implementations for various UI toolkits, because they tend to provide
fairly different capabilities, and I’m not a fan of the “minimal common
subset” approach, nor the “emulate missing features” approach, at least
not for a UI.
A UI should be crafted for its target environment.
So, to allow a single backend to talk to the various UI
implementations, and vice versa, I define an interface using events.
There are events that the UI needs to send to the Backend, in order to
instruct it to do something (e.g. Save some file, start a new game,
etc).
There may also be events that the Backend needs to send to the UI
(Remote user has connected, Your CPU is on fire, etc.).
For things where progress or completion information is required (e.g.
UI asks Backend to SaveFile), there is a set of “Progress” events -
PercentComplete, Failure, Completed - that the Backend will send to the
UI based on how its processing of the initiating event is going. (So
UI->Backend(Save), Backend->UI(Failure on Save), and the UI warns the
user somehow with details from the Failure event.)
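Building on the ThreadedEventTarget sketch above, the Save exchange
might look roughly like this (event and method names are made up):

  # Events are just small value objects.
  SaveFile        = Struct.new(:path)              # UI -> Backend
  PercentComplete = Struct.new(:request, :percent) # Backend -> UI, during long jobs
  Completed       = Struct.new(:request)           # Backend -> UI
  Failure         = Struct.new(:request, :reason)  # Backend -> UI

  class Backend < ThreadedEventTarget
    def initialize(ui)
      super()
      @ui = ui
    end

    def handle_event(event)
      case event
      when SaveFile
        begin
          File.open(event.path, 'w') { |f| f.write(current_state) }
          @ui.send_event(Completed.new(event))
        rescue => e
          @ui.send_event(Failure.new(event, e.message)) # UI warns the user with this
        end
      end
    end

    private

    def current_state
      'stand-in for whatever the Backend would really be saving'
    end
  end

The UI’s half is just backend.send_event(SaveFile.new(path)) plus a
couple of branches in its own handle_event.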
One of the goals I had that heavily influenced the design was to be
able to have runtime selection of user interfaces, based on user
preference.
So, say, my application has gtk2, gtk1 and tk based user interface code
available, with user preference in that order. The application would
try to create a gtk2 interface, but if that should fail, due to, say,
a missing library, the gtk1 interface would be tried, and so on.
In order to do this, I decided to use a plugin system, so each UI is
wrapped in a Plugin to give us a bit of convenient metadata to work
with. As these plugins are loaded, they register into a hierarchy kind
of like this:
  plugins = {
    'ui' => {
      'graphic' => {
        'gtk2' => SomeGtk2UIPlugin,
        'gtk1' => SomeGtk1UIPlugin
      },
      'text' => {
        'curses' => SomeCursesUIPlugin
      }
    }
  }
Could probably add things like other/web or something too.
These plugins have an ‘instantiate’ method that attempts to require the
various libraries and code needed by that particular plugin, and to
return an instance of the class that implements the main UI EventTarget.
Currently the application just runs with the first UI that it can
instantiate, but the user preference stuff shouldn’t be a problem.
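In case it helps, here’s roughly how I imagine the plugin and fallback
side hanging together (a sketch only - the names are made up, and the
real plugin class carries a bit more metadata than this):

  class UIPlugin
    attr_reader :name

    def initialize(name, &builder)
      @name, @builder = name, builder
    end

    # Try to load the toolkit and build the main UI EventTarget.
    # A missing library just means "not available here", so we return
    # nil and let the caller fall back to the next preference.
    def instantiate(app)
      @builder.call(app)
    rescue LoadError
      nil
    end
  end

  # e.g. what would get registered as plugins['ui']['graphic']['gtk2']
  gtk2_plugin = UIPlugin.new('gtk2') do |app|
    require 'gtk2'
    Gtk2UI.new(app)        # that plugin's own UI EventTarget class
  end

  # Walk the user's preferred order and use the first UI that works.
  def create_ui(app, plugins, preference = %w(gtk2 gtk1 curses))
    available = {}
    plugins['ui'].each { |category, uis| available.update(uis) }
    preference.each do |name|
      plugin = available[name] or next
      ui = plugin.instantiate(app) and return ui
    end
    raise 'no usable user interface could be instantiated'
  end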
> So the app looks basically like this:
> session.get_some_parameters
> session.do_stuff
> session.wrap_up
> Those aren’t the real names, but you get
> the idea.
> Very natural in terms of old-fashioned
> gets and puts calls.
I guess under my system, the UI would probably put itself into the “get
some parameters” state after it is initialized, while the backend would
be in a wait state, with no events currently pending.
Once the UI has finished “getting parameters”, it would probably send an
event to the Backend with a handle to the parameters; the Backend would
then begin “doing stuff”. The “wrap up” logic would probably be in the
Backend, but which module would initiate it would depend on the details
I guess.
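In terms of the earlier sketches, that hand-off might amount to
something like this (names are again made up):

  ParametersReady = Struct.new(:params)  # UI -> Backend, carries the parameter handle

  # UI side, once "get some parameters" is done:
  #   @backend.send_event(ParametersReady.new(params))
  #
  # Backend side, another branch in its handle_event:
  #   when ParametersReady
  #     result = do_stuff(event.params)        # the "do stuff" phase
  #     @ui.send_event(Completed.new(result))  # whoever owns "wrap up" takes over here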
> And of course, there is a little more
> complexity behind the general flow of
> control.
My way adds quite a bit more complexity, but I likes it
> I suppose I could just make a “run” method
> for each class/UI… that seems best right
> now. But then what have I really bought for
> myself?
Now that’s a good question. The way I’ve described was the only way I
could think of where I could pretty much guarantee the following:
- user interface should never stop responding
(processing done in separate thread)
- core logic should have no knowledge of the UI, nor any
structural requirements forced on it by a particular UI.
- user interface can be selected at runtime based on available
implementations, system libraries and user preference.
I also find it helps me structure my thoughts - you can see fairly
clearly which state each module is in, and what it should be doing.
Anyway, hope that’s useful, not just long.
On Tue, 15 Jul 2003 12:30:28 +0900 “Hal E. Fulton” hal9000@hypermetrics.com wrote:
--
Stephen Lewis
slewis@paradise.net.nz