What's beyond Rails?

What would be really cool would be a way for a server-side Ruby program to instantiate Flash GUIs. (Maybe written as a TK emulation.)

--Peter


On Apr 13, 2005, at 4:18 PM, Ezra Zygmuntowicz wrote:

You could take a peek at amf4r. It's on the RAA. This lib lets you write your front end in Flash and use Ruby on the server instead of Macromedia's proprietary communication server. AMF stands for Action Message Format, and it's a binary format used to ferry objects back and forth between ActionScript and Ruby. You can pass just params or arrays or hashes, but the coolest thing is you can basically instantiate your server-side Ruby objects in ActionScript and call methods on them from Flash. It's pretty cool because of the flexibility of Flash interfaces. It's not going to let you write a server-side Photoshop any time soon, but it's a huge leap forward over HTML as far as interfaces go.

--
There's neither heaven nor hell, save what we grant ourselves.
There's neither fairness nor justice, save what we grant each other.

Francis Hwang wrote:

For those of us who don't know, care to post a quick explanation of what a post-back is? I Googled it but only got a bunch of ASP.NET pages. And if I'm gonna read about Microsoft APIs, somebody better be paying me.

I take it you either do not use the XMLHttpRequest object, or are paid to read about it?

:-)

Actually, from poking around MSDN, I get the sense that postbacks are a mix of XMLHttpRequest calls (or the like) and DHTML. Client-side form objects send HTTP posts back to the server, where request handlers reply with some data, and the page is then updated. Remote scripting, again.

James

Here goes.

Postbacks do not, unfortunately, use XMLHttpRequest (comment to James)

Postback allows you to treat form controls similarly to those in desktop apps, in that they appear to maintain 'state'. Say you're on a shipping address page, with a country dropdown and province dropdown. When you change the country dropdown, you want it to load the provinces in that country.

Barring the obvious alternatives in this simple example (AJAX, JS preloading, etc.), the form has to go back to the server to get the new list of provinces, but you want to do this without losing the 'state' of the page (which mostly means the current state/value of all the controls on the page), or forcing unnecessary trips to the database for dynamic data on the page that is not affected (e.g. a shopping cart list). So when you change the country, the 'onchange' event triggers a javascript function that submits the form. ASP.NET will read in this POSTed form, determine what you were trying to do (which I explain below), do it, then send the page back with the state restored, apart from the changes caused by your action (the new provinces in the dropdown).

ASP.NET is able to maintain state by storing a 'snapshot' of the initial page state (when it was first sent to you based on an HTTP GET). This snapshot is just a hidden input with an encoded, concatenated set of name/value pairs, called the PostBackData. When a control triggers the page to post back, ASP.NET will recognize it is doing so, and will read in that page's PostBackData field. It uses this to determine what it sent to you BEFORE you changed any fields. It then reads in the CURRENT values from the POST variables, allowing it to see what is different (the country field changed). Then, for any control that changed, it raises an event. This is where you can do your magic and tell it to affect the *new* resulting state (in this case different provinces). Right before it sends you the new page, it takes a new snapshot and stores it in the same PostBackData field. Continuity.

The value for this technique is less obvious when you're dealing with simple forms (by this point even Firefox remembers text fields and checkbox fields for you). It's more useful when dealing with things that technically are not form fields, such as a table of data that was generated by one of ASP.NET's WebControls (e.g. the shopping cart). If you're changing the country dropdown, you don't want ASP.NET to go and pull the rest of the database-driven data fields all over again (e.g. if you keep a shopping cart list stored in one of these). Saving yourself a database trip is nice, but beside the point. The main idea is that the PostBackData's hidden input value tells ASP.NET what the page looked like BEFORE it posted back. And, since it's looking at this BECAUSE it posted back, it can compare them and take action based on the difference.
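
If it helps to see it in code, the core of the mechanism sketches out to something like this in Ruby (the class and field names here are made up for illustration; this is the shape of the idea, not ASP.NET's actual API):

  require 'base64'

  # Made-up sketch: diff the saved snapshot against the freshly POSTed values
  # and raise a change event for any control whose value differs.
  class PostbackPage
    def initialize(params)
      @params   = params                               # current POST values
      @snapshot = decode(params['PostBackData'].to_s)  # what was rendered last time
      @handlers = Hash.new { |h, k| h[k] = [] }
    end

    def on_change(field, &block)
      @handlers[field] << block
    end

    def process
      @params.each do |field, value|
        next if field == 'PostBackData'
        @handlers[field].each { |h| h.call(value) } if @snapshot[field] != value
      end
    end

    # Take a fresh snapshot right before rendering and hide it in the form again.
    def hidden_field(state)
      "<input type=\"hidden\" name=\"PostBackData\" value=\"#{encode(state)}\"/>"
    end

    private

    def encode(hash)
      Base64.strict_encode64(hash.map { |k, v| "#{k}=#{v}" }.join('&'))
    end

    def decode(blob)
      return {} if blob.empty?
      Hash[Base64.strict_decode64(blob).split('&').map { |p| p.split('=', 2) }]
    end
  end

  # page = PostbackPage.new(params)
  # page.on_change('country') { |code| load_provinces_for(code) }
  # page.process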

I hope I've explained this clearly.

Generally speaking, Postback is a good idea, and it works, but I don't think it's necessarily the most elegant solution to maintaining state. The postback string can get very long, and overall it's a band-aid to a larger problem (statelessness).

Incidentally, there is a postback generator for RoR, by Tobias Luetke, which I have not looked at.

http://wiki.rubyonrails.com/rails/show/Generators

There is a good explanation of ASP.NET's Postback here. This is by far the most straightforward explanation I've ever read (and I've read plenty), and even here it seems like a big hassle to deal with.

http://www.15seconds.com/issue/020102.htm

Matt

Francis Hwang wrote:


On Apr 13, 2005, at 4:38 PM, Matt Pelletier wrote:

1. Post-backs
I have wasted plenty of time trying to work around ASP.NET's postback architecture. For simple apps it indeed will ostensibly work like a WinForm. For anything moderately complex, you can get into a mess of issues when designing controls and components. Blech. I think the entire premise of postbacks is flawed (hopefully I'm not naively soured just by MS's implementation). Rather, relying on a postback paradigm to mimic statefulness is flawed.

For those of us who don't know, care to post a quick explanation of what a post-back is? I Googled it but only got a bunch of ASP.NET pages. And if I'm gonna read about Microsoft APIs, somebody better be paying me.

Francis Hwang
http://fhwang.net/

You're missing the architecture. Html+javascript for just the UI layer.
The pixel munging is done on the server in whatever language you
please. Display of results is done by regenerating results on the
server, which the client refreshes.

The html+javascript is just used as a sort of very bare bones 2d scene
graph, like a stripped down gdi or xwindows.

In other words, very similar to google maps.

Like I said, you can think of it abstractly as a spreadsheet where the
client has all the logic for cell interdependencies, but the heavy
calculations for cell re-evaluation are done by posting and getting to
the server.
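
A toy version of the server side of that, in Ruby (the /recalc endpoint and the formulas are invented just to show the shape of it):

  #!/usr/bin/env ruby
  # Toy "spreadsheet" server: the browser keeps the dependency graph and just
  # POSTs the changed cells here; the heavy re-evaluation happens server side
  # and the results get spliced back into the DOM by the client's javascript.
  require 'webrick'
  require 'cgi'

  FORMULAS = {
    'total' => lambda { |c| c['price'].to_f * c['qty'].to_i },
    'tax'   => lambda { |c| c['price'].to_f * c['qty'].to_i * 0.08 }
  }

  server = WEBrick::HTTPServer.new(:Port => 3000)

  server.mount_proc('/recalc') do |req, res|
    cells = {}
    CGI.parse(req.body.to_s).each { |k, v| cells[k] = v.first }

    # Recompute every derived "cell" and hand back name=value pairs.
    res['Content-Type'] = 'text/plain'
    res.body = FORMULAS.map { |name, f| "#{name}=#{f.call(cells)}" }.join('&')
  end

  trap('INT') { server.shutdown }
  server.start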

Peter Suk wrote:


On Apr 13, 2005, at 4:18 PM, Ezra Zygmuntowicz wrote:

You could take a peek at amf4r. It's on the RAA. This lib lets you write your front end in Flash and use Ruby on the server instead of Macromedia's proprietary communication server. AMF stands for Action Message Format, and it's a binary format used to ferry objects back and forth between ActionScript and Ruby. You can pass just params or arrays or hashes, but the coolest thing is you can basically instantiate your server-side Ruby objects in ActionScript and call methods on them from Flash. It's pretty cool because of the flexibility of Flash interfaces. It's not going to let you write a server-side Photoshop any time soon, but it's a huge leap forward over HTML as far as interfaces go.

What would be really cool would be a way for a server-side Ruby program to instantiate Flash GUIs. (Maybe written as a TK emulation.)

I believe this thread has already mentioned the Alph project. And you could use Laszlo with Ruby.

James

--

http://catapult.rubyforge.com
http://orbjson.rubyforge.com
http://ooo4r.rubyforge.com
http://www.jamesbritt.com

Yes, this would be great!

A while back I did a similar thing, using Tcl (with a Tk emulation
layer) on the server to create and drive a Java applet GUI. Worked very
well for what it did, but would be nicer to see the client piece done in
Flash instead of Java for all kinds of reasons.

For the curious, there's an old paper about it (the work itself wasn't
open source) at http://www.markroseman.com/pubs/proxytk.pdf.

Mark


Peter Suk <peter.kwangjun.suk@mac.com> wrote:

What would be really cool would be a way for a server-side Ruby program
to instantiate Flash GUIs. (Maybe written as a TK emulation.)

Actually, from poking around MSDN, I get the sense that
postbacks are a mix of XMLHttpRequest calls (or the like)
and DHTML.

Microsoft does have some docs on remote scripting, but "postback" in ASP.NET
terminology has nothing to do with Ajax technologies.

In ASP.NET, "postback" means someone GETs a page and then POSTs the form
back to the same page. It simply means the user has POSTed back to the same
page they were just viewing.

The purpose (or perhaps just a side effect) is that ASP.NET can then
serialize the page object's state into a hidden form field behind the
scenes.

1) A user GETs FooPage.aspx, an HTML file with ASP.NET markup, that also has
a FooPage.cs "codebehind" .NET class with relevant business/display logic,
etc.

2) FooPage.cs does some stuff and can put anything it wants to keep/persist
into this ViewState hash table

3) FooPage.html is written out to the browser - and the ViewState hash table
is base64 encoded into a hidden HTML form field

4) User does some stuff and hits submit to POST back into FooPage.html

5) ASP.NET sees the hidden HTML form field, and says "oh, this is a
postback!", and deserializes it into the ViewState for the request's
instance of FooPage.cs

So, summarizing, postback allows ASP.NET to have this magic ViewState hash
table that persists across requests, not via a session, but by riding along
in the HTML.

ASP.NET developers frequently do "super.isPostBack()" (or is it
"base.isPostBack()"? -- too many languages) calls in their Page_OnLoad
method to tell whether their ViewState is brand new and they should do
initialization and put stuff in it, or if it already has stuff in it from
the previous request.
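
If it's easier to see in code, the round trip mocks up in a few lines of Ruby -- Marshal and Base64 here are just stand-ins for whatever ASP.NET actually does on the wire, and the page logic is invented:

  require 'base64'

  module ViewState
    FIELD = '__VIEWSTATE'   # the hidden form field it all rides in

    def self.dump(hash)
      Base64.strict_encode64(Marshal.dump(hash))
    end

    def self.load(blob)
      Marshal.load(Base64.strict_decode64(blob))
    end
  end

  # The Page_OnLoad pattern: seed the state on a fresh GET, otherwise pick up
  # whatever rode along in the hidden field from the previous request.
  def page_on_load(params)
    if params[ViewState::FIELD]                       # "oh, this is a postback!"
      state = ViewState.load(params[ViewState::FIELD])
    else
      state = { 'cart' => [], 'country' => 'US' }     # first GET: initialize
    end

    # ... handle the request, mutating state as needed ...

    # Serialize it back out so it survives the next POST.
    "<input type=\"hidden\" name=\"#{ViewState::FIELD}\" " \
      "value=\"#{ViewState.dump(state)}\"/>"
  end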

Also, just FYI, on the few ASP.NET projects I've done I've found the
ViewState/postback stuff to be pretty slick. It's pretty well hidden behind
the scenes, and from what I've seen, successfully leveraged by a whole lot
of the built-in ASP.NET controls/widgets/etc. That being said, I didn't do
anything too complicated, so given the original poster's frustrations with
postback, it seems like YMMV.

- Stephen

Matt Pelletier wrote:

Here goes.

Postbacks do not, unfortunately, use XMLHttpRequest (comment to James)

Ah.

...
I hope I've explained this clearly.

Yes. Yow.

Generally speaking, Postback is a good idea, and it works, but I don't think it's necessarily the most elegant solution to maintaining state. The postback string can get very long, and overall it's a band-aid to a larger problem (statelessness).

Indeed, and it seems the sort of thing that encourages poor Web app design because it masks certain difficulties.

There is a good explanation of ASP.NET's Postback here. This is by far the most straightforward explanation I've ever read (and I've read plenty), and even here it seems like a big hassle to deal with.

http://www.15seconds.com/issue/020102.htm

Thanks for the explanation.

James

Intriguing. But leaving aside the issue of whether it would be economically worth it to offer such software as a web service, I wonder how good your bandwidth would have to be for this to be sensible with high-definition images.

For example: I've got Photoshop files on my machine that easily top 100 MB in size, but let's just use 100 MB as a representative sample. Let's say I've got that file living in this web app, and I run a one-pixel gaussian blur on the whole thing. The web client issues the command to the server, the server churns on this for a few seconds, and sends the whole image down the pipe again ... My cable modem at home has a peak downstream of 5 Mbps, so it's going to take 20 seconds for me to get the new image -- and that's assuming the app doesn't have to share the pipe with any other program like BitTorrent or Acquisition, or, hell, my mail program, which checks my mail every 5 minutes.

Now, you can try to run some lossless compression on the file to try to get it down to, say, 10 seconds, but that makes everything more complex and, oh, sometimes you've got high-entropy images so you've gone to all the trouble for very little gain.

You can also try to serve only the view of the image that the user needs right that instant, but that doesn't so much kill the lag as spread it around. The monitor I use at home, for example, runs at 1280x960 with 24-bit color, which means, if my math is right, that filling my screen requires 3.5 MB worth of pixels. It takes me 0.7 seconds to download that much data if my cable modem's doing well. That doesn't sound like so much until you realize you have to have that wait every single time you zoom in or out, or scroll up or down, or even change the background color from white to transparent. This is similar to the problem faced by online FPS engines: Since you can't rely on subsecond response times when you're playing CounterStrike over the network, the server has to give the client more information than strictly necessary and trust it to do what's right until the next time contact is established.

And by the way, if you're dealing with 24-bit images on an app driven by HTTP+JavaScript, how good is the color fidelity going to be?

I'm happy to do plenty of things on the web, but image processing ain't one of them.

Francis Hwang


On Apr 15, 2005, at 7:19 PM, jason_watkins@pobox.com wrote:

You're missing the architecture. Html+javascript for just the UI layer.
The pixel munging is done on the server in whatever language you
please. Display of results is done by regenerating results on the
server, which the client refreshes.

The html+javascript is just used as a sort of very bare bones 2d scene
graph, like a stripped down gdi or xwindows.

In other words, very similar to google maps.

Like I said, you can think of it abstractly as a spreadsheet where the
client has all the logic for cell interdependencies, but the heavy
calculations for cell re-evaluation are done by posting and getting to
the server.

Stephen Haberman wrote:

Actually, from poking around MSDN, I get the sense that
postbacks are a mix of XMLHttpRequest calls (or the like)
and DHTML.

Microsoft does have some docs on remote scripting, but "postback" in ASP.NET
terminology has nothing to do with Ajax technologies.

In ASP.NET, "postback" means someone GETs a page and then POSTs the form
back to the same page. It simply means the user has POSTed back to the same
page they were just viewing.

The purpose (or perhaps just a side effect) is that ASP.NET can then
serialize the page object's state into a hidden form field behind the
scenes.

<snip/>

Thanks for the explanation. This is handy to know if I ever get around to more C#/ASP.net coding.

James

>>> The web client issues the command to the server, the server churns on
this for a few seconds, and sends the
whole image down the pipe again ...
<<<

Still the wrong model. If you were providing Photoshop as a web
application, presumably you'd be providing storage on the server
itself.

So after your gaussian blur filter runs, the app only needs to send you
back a ~1000x1000 pixel image that reflects the portion of the image on
display.

If you've worked with Satori you can understand how non-destructive editing
with a transaction log works. Your interactions create a log of actions
to complete, and the view is computed on demand... but the entire
calculation need not be done at the full resolution for the entire
file. Logging the actions and generating a preview is sufficient. After
editing is complete and it comes time to save results, then you can
render the transactions at the output resolution.
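
The skeleton of that is tiny -- something like this in Ruby, with everything made up and the actual pixel work handed off to ImageMagick or whatever:

  # Made-up sketch of an edit session: actions are logged, previews rendered
  # small, and only the final save replays the log at full resolution.
  class EditSession
    def initialize(source_path)
      @source = source_path
      @log    = []          # e.g. [[:gaussian_blur, 1.0], [:crop, 0, 0, 800, 600]]
    end

    def apply(action, *args)
      @log << [action, *args]
      render(:scale => 0.1)                       # cheap preview for the browser
    end

    def save(output_path)
      render(:scale => 1.0, :to => output_path)   # replay the log at output res
    end

    private

    def render(opts)
      # Hand the log to whatever actually munges pixels (RMagick, a command
      # line ImageMagick pipeline, ...). Returning a hash keeps the sketch
      # self-contained.
      { :source => @source, :actions => @log, :options => opts }
    end
  end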

>>> That
doesn't sound like so much until you realize you have to have that wait
every single time you zoom in or out, or scroll up or down, or even
change the background color from white to transparent.
<<<

Once again, there's no reason to assume our client is brainless. Much
like google maps maintains a tile cache, we could maintain a cache of
layered tilings. Of course it's not going to be as responsive as
communication on a local machine.

My point is merely that http itself is not the limitation.
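
The cache itself is nothing exotic. In Ruby-flavored pseudocode (tile size and URL scheme invented; the real thing would live in the browser's javascript):

  require 'net/http'
  require 'uri'

  # Tile cache keyed on zoom and position; only unseen tiles hit the server.
  class TileCache
    TILE = 256   # pixels per tile edge

    def initialize(base_url)
      @base_url = base_url
      @tiles    = {}
    end

    def fetch(zoom, x, y)
      @tiles[[zoom, x, y]] ||=
        Net::HTTP.get(URI("#{@base_url}/tile?z=#{zoom}&x=#{x}&y=#{y}"))
    end

    # Scrolling or zooming mostly hits tiles we already have.
    def viewport(zoom, px, py, width, height)
      (px / TILE..(px + width) / TILE).flat_map do |x|
        (py / TILE..(py + height) / TILE).map { |y| fetch(zoom, x, y) }
      end
    end
  end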

>>> This is similar
to the problem faced by online FPS engines: Since you can't rely on
subsecond response times when you're playing CounterStrike over the
network, the server has to give the client more information than
strictly necessary and trust it to do what's right until the next time
contact is established.
<<<

While that's true for modems, it's not necessarily true once you get to
broadband. This is a complex topic with a lot of details. I happen to
have a friend who works professionally in this area. I've played
versions of quake3 that remove all prediction and run at a locked 60hz
rate both for client side graphics and network state update. Latency
isn't necessarily the same as throughput, and human beings are
surprisingly tolerant of latency. You can hit up citeseer for the
relevant basic research done by the .gov in the initial military
simulation days: the net result is that humans can tolerate as much as
150ms of local lag, and that at around 50ms of local lag human
observers begin to have trouble distinguishing between lagged and
unlagged input.

50ms round trip is possible on broadband these days, though not assured
coast to coast in the US or anything.

Anyhow, the point is not whether it's a good idea to do PS as a web
app. I think it's a miserable idea. The point is that html+javascript
rendering is sufficiently fast to serve as the basic drawing layer of a
GUI abstraction. Of course something purpose designed for it would be
better, but with the possible exception of flash, I don't think
anything is going to get enough market penetration: html+javascript are
good enough (barely).

Still the wrong model. If you were providing Photoshop as a web
application, presumably you'd be providing storage on the server
itself.

So after your gaussian blur filter runs, the app only needs to send you
back a ~1000x1000 pixel image that reflects the portion of the image on
display.

Okay, but that portion itself represents a significant enough time lag to cause a serious problem for the person who relies on Photoshop to get work done. A 1000x1000 image, at 24-bit color, is about a 2.9 MB image. My downstream is 5 Mbps which means that under peak conditions it takes about 0.48 seconds to download the view. Not the entire image, just the view itself.

Also, 1 million pixels is a fairly conservative estimate for the size of the view you have to deal with. On a full-screen image on my paltry 17" monitor that's about what I get. Apple's 30-inch cinema display gives you 4 times that many pixels.

I guess the point I'm trying to make is that although it's easy for me to see certain apps move to web-land, Photoshop isn't one of them. When you're a serious Photoshop user, everything you do sloshes around a lot of data, so the network itself becomes an obstacle, and until you solve the last-mile bandwidth problem you can't deliver something like this as a web app to the home user.

>>> This is similar
to the problem faced by online FPS engines: Since you can't rely on
subsecond response times when you're playing CounterStrike over the
network, the server has to give the client more information than
strictly necessary and trust it to do what's right until the next time
contact is established.
<<<

While that's true for modems, it's not necessarily true once you get to
broadband. This is a complex topic with a lot of details. I happen to
have a friend who works professionally in this area. I've played
versions of quake3 that remove all prediction and run at a locked 60hz
rate both for client side graphics and network state update. Latency
isn't necessarily the same as throughput, and human beings are
surprisingly tolerant of latency. You can hit up citeseer for the
relevant basic research done by the .gov in the initial military
simulation days: the net result is that humans can tolerate as much as
150ms of local lag, and that at around 50ms of local lag human
observers begin to have trouble distinguishing between lagged and
unlagged input.

50ms round trip is possible on broadband these days, though not assured
coast to coast in the US or anything.

Maybe you know more about this than I do, but how much data does a FPS have to send out and receive, anyway? It's been my impression that a FPS server sets a lot of the original model when the game sets up, and then sends the clients a continuous stream of location and status updates. I imagine the size of these updates is measurable in bytes or kilobytes.

Anyhow, the point is not whether it's a good idea to do PS as a web
app. I think it's a miserable idea. The point is that html+javascript
rendering is sufficiently fast to serve as the basic drawing layer of a
GUI abstraction. Of course something purpose designed for it would be
better, but with the possible exception of flash, I don't think
anything is going to get enough market penetration: html+javascript are
good enough (barely).

Yeah, I agree with you there. Applications like Google Maps make the case for a lot of really astounding web apps, possible in the near-term.

Francis Hwang


On Apr 16, 2005, at 2:49 AM, jason_watkins@pobox.com wrote:

> Still the wrong model. If you were providing Photoshop as a web
> application, presumably you'd be providing storage on the server
> itself.
>
> So after your gaussian blur filter runs, the app only needs to send you
> back a ~1000x1000 pixel image that reflects the portion of the image on
> display.

Okay, but that portion itself represents a significant enough time lag
to cause a serious problem for the person who relies on Photoshop to
get work done. A 1000x1000 image, at 24-bit color, is about a 2.9 MB
image. My downstream is 5 Mbps which means that under peak conditions
it takes about 0.48 seconds to download the view. Not the entire image,
just the view itself.

That's 5 megabits, not bytes, right? So more like 4.6 seconds,
at full speed?

With the right kind of compression, you could send a lot less data.
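
For anyone checking the arithmetic (no compression assumed, numbers from the post above):

  view_mb    = 2.9     # the 1000x1000, 24-bit view from above, in megabytes
  file_mb    = 100.0   # the whole Photoshop file
  downstream = 5.0     # cable modem, in megabits per second

  puts "view: %.1f seconds at full speed" % (view_mb * 8 / downstream)   # => 4.6
  puts "file: %.0f seconds at full speed" % (file_mb * 8 / downstream)   # => 160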

[...]

>>>> This is similar
> to the problem faced by online FPS engines: Since you can't rely on
> subsecond response times when you're playing CounterStrike over the
> network, the server has to give the client more information than
> strictly necessary and trust it to do what's right until the next time
> contact is established.
> <<<
>
> While that's true for modems, it's not necessarily true once you get to
> broadband. This is a complex topic with a lot of details. I happen to
> have a friend who works professionally in this area. I've played
> versions of quake3 that remove all prediction and run at a locked 60hz
> rate both for client side graphics and network state update. Latency
> isn't necessarily the same as throughput, and human beings are
> surprisingly tolerant of latency. You can hit up citeseer for the
> relevant basic research done by the .gov in the initial military
> simulation days: the net result is that humans can tolerate as much as
> 150ms of local lag, and that at around 50ms of local lag human
> observers begin to have trouble distinguishing between lagged and
> unlagged input.
>
> 50ms round trip is possible on broadband these days, though not assured
> coast to coast in the US or anything.

Regarding, "and human beings are surprisingly tolerant of latency",
it's hard to express how funny this is in the context of games like
quake. On the one hand it's true the game is quite playable with
an order-of-magnitude difference in latency between players. But,
oh, the griping, whining, and complaining that ensues! :-)

Here are the round-trip ping times for everyone connected to my
quake2 server at the moment (in milliseconds):

    29 Raditz
    43 ompa
    56 Ponzicar
    58 SKULL CRUSHER
    59 [EF] 1NUT
    66 p33p33
    67 HyDr0
    70 HamBurGeR
    74 iJaD
    93 Lucky{MOD}
   140 ToxicMonkey^MZC
   145 Krazy
   164 nastyman
   171 Bellial
   242 mcdougall_2
   372 Demon

In a game like quake, you can feel the difference in latency
all the way down to LAN speeds. A couple players, not playing
at the moment, live near the server and even have sub-20
millisecond pings! Which is pretty close to LAN, but
experienced players still report having to adjust their play
style between 20 msec (lives near the server) and <10 msec
(LAN play) latency.

[...] how much data does a FPS
have to send out and receive, anyway? It's been my impression that a
FPS server sets a lot of the original model when the game sets up, and
then sends the clients a continuous stream of location and status
updates. I imagine the size of these updates is measurable in bytes or
kilobytes.

Quake2 data rate for broadband players is in the range of
8 to 15 kbytes per second.

... In a desperate attempt to say something on-topic:
I use ruby to write all my quake2 admin scripts.
I haven't tried Rails yet. The other night I was able
to write a server-status CGI script from scratch in a
couple hours, including spawning multiple threads to
poll all servers concurrently. (So amazingly easy in
ruby.)
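
The concurrent polling bit really is just a handful of lines -- something along these lines, with the hostnames made up and the error handling trimmed down:

  require 'socket'
  require 'timeout'

  SERVERS = [['quake.example.net', 27910], ['frag.example.org', 27910]]

  # Ask one quake2 server for its status over UDP (out-of-band packets are
  # four 0xff bytes followed by the command, if memory serves).
  def query_status(host, port)
    sock = UDPSocket.new
    sock.send((0xff.chr * 4) + 'status', 0, host, port)
    Timeout.timeout(3) { sock.recvfrom(4096).first }
  rescue Timeout::Error, SocketError
    nil
  ensure
    sock.close if sock
  end

  # One thread per server, so a slow or dead host doesn't stall the rest.
  threads = SERVERS.map do |host, port|
    Thread.new { [host, query_status(host, port)] }
  end

  threads.each do |t|
    host, reply = t.value
    puts "#{host}: #{reply ? reply.split("\n").first(2).join(' ') : 'no response'}"
  end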

I considered trying Rails but had no need for a
database to be involved. . . . Someday I'll try Rails
- maybe to keep track of all players high scores
across all game levels.

OK not quite on-topic... I tried <grin>

Regards,

Bill


From: "Francis Hwang" <sera@fhwang.net>

On Apr 16, 2005, at 2:49 AM, jason_watkins@pobox.com wrote:

>>> Maybe you know more about this than I do, but how much data does a FPS
have to send out and receive, anyway?
<<<

Well, the theoretical limit is always just input itself. Mouse skew is
often within single byte values, but even if you bumped it up to 16 bits
in x,y you're at 4 bytes per sample. Within the sample we might have an
upper bound of 10 key up/down events, so figure something like 16 bytes
per input timeslice. The server can simply act as a relay for this
data.

So the lower limit is something like 16 * numplayers * hz. In other
words, you pay more for UDP packet headers.
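
Back of the envelope (player count and tick rate are just example figures):

  input_bytes = 16    # one player's input per timeslice, from above
  udp_ip_hdr  = 28    # rough UDP + IP header cost on every packet
  players     = 16    # example figures
  hz          = 60

  puts "per packet: #{input_bytes} bytes of input, #{udp_ip_hdr} bytes of header"
  puts "relayed to each client: #{input_bytes * players * hz} bytes/sec " +
       "(~#{input_bytes * players * hz / 1024} KB/s)"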

For reasons of cheat protection, few games use this model, however (Chris
Taylor is the only person I know who's a real big fan of it).

It's more common for games to instead simulate on the server and then
stream the state onto the client. The quake engines themselves went
back and forth on the details of this, but eventually settled on simply
streaming serially id'd state deltas from the server which the client
acknowledges. The server stores a sync point for each client, and
journals the simulation state from the point of last acknowledgement.
Each client can have its own rate, and the server sends a cumulative
update from the sync point to the current server state. The real cost
of this model is memory, which is why quake3 won't scale to 100's of
players. There are also disadvantages to its failure mode: the size of a
future packet grows as packets are dropped. But, a client can get back
in sync with a single such packet, so it proves itself fairly robust.

If you read the literature, you can see the quake3 model + the client
extrapolation code is very close to a well known optimistic discrete
event simulation algorithm (timewarp).

Interestingly, of the many models the guy I know has played with, the
one that works best instead buckets state slices to be the same for
all clients, and echoes the previous state with each state it sends,
i.e. sending A, AB, BC, CD. Single packet drops are almost never noticed.
Loss of sync greater than a single slice is handled out of band by the
same code that handles a player jumping into the middle of a simulation
state. It's far less memory-demanding, so it scales quite well, and in
practice ends up being quite competitive with the quake3 model.
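
A toy version of the echo idea, to make it concrete (the names and state format are invented):

  # Toy version of the echo scheme: each packet carries the current slice plus
  # the previous one, so any single drop costs nothing.
  class SliceSender
    def initialize
      @previous = nil
    end

    def packet_for(current)
      payload = [@previous, current].compact   # A, then AB, BC, CD, ...
      @previous = current
      payload
    end
  end

  class SliceReceiver
    def initialize
      @last_seq = -1
    end

    # A dropped packet is papered over by the echo in the next one; a gap
    # bigger than one slice means we fell out of sync and need the same
    # out-of-band path a late-joining player uses.
    def receive(payload)
      payload.each do |slice|
        next if slice[:seq] <= @last_seq
        return :resync_needed if slice[:seq] > @last_seq + 1
        apply(slice)
        @last_seq = slice[:seq]
      end
      :ok
    end

    def apply(slice)
      # merge slice[:state] into the local simulation here
    end
  end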

It's been a while since I looked at the actual rates, but I do
recall that when I'd lock counterstrike to send input at 60hz and
request server frames at 60hz it'd end up at around 2kbyte/sec download
nominal, with bursts up to maybe 6kbyte/sec in rare moments. Input
upload is always minuscule.

First person shooters are _always_ on topic :-)

Yours,

tom


On Sun, 2005-04-17 at 02:35 +0900, Bill Kelly wrote:

... In a desperate attempt to say something on-topic:
I use ruby to write all my quake2 admin scripts.

>>> Regarding, "and human beings are surprisingly tolerant of latency",
it's hard to express how funny this is in the context of games like
quake.
<<<

Well, don't get me started, but quake's net code has some significant
implementation and algorithmic issues, particularly quake3's prediction
handling. One of the things I took away from the darpa papers I skimmed
is that what humans really sense isn't just the absolute latency; the
variance matters as well. If you give someone a consistent local lag
they stop noticing it quite quickly. But if you factor prediction into
the control system it starts becoming complicated quickly. Suddenly the
delay factor the brain has to anticipate is changing depending on
several different factors that are outside your awareness.

Someone used to have some java apps up where you could play with this
stuff yourself. It's quite interesting... a similar experience to doing
blind testing of audio equipment... a little uncomfortable when you
have some of your assumptions challenged.

Well, don't get me started, but quake's net code has some significant
implementation and algorithmic issues, particularly quake3's prediction
handling. One of the things I took away from the darpa papers I skimmed
is that what humans really sense isn't just the absolute latency; the
variance matters as well. If you give someone a consistent local lag
they stop noticing it quite quickly. But if you factor prediction into
the control system it starts becoming complicated quickly. Suddenly the
delay factor the brain has to anticipate is changing depending on
several different factors that are outside your awareness.

Someone used to have some java apps up where you could play with this
stuff yourself. It's quite interesting... a similar experience to doing
blind testing of audio equipment... a little uncomfortable when you
have some of your assumptions challenged.

I haven't read the darpa papers; sounds interesting. However,
I don't think that the hypothesis that humans can get used to
a consistent local lag--which I completely agree with--tells
the whole story in a hyper real-time environment like quake.

Quake2 seems to do a good job providing a consistent local lag,
provided there's no packet loss. (Q2 doesn't handle packet
loss very smoothly.) I've played quake2 online pretty much
daily for seven years. The part of the story that I think isn't
told by humans being able to stop noticing a consistent local
lag is that the more latency you have--even if you're used to
it--the harder it is to beat players with low-latency connections.

It's fun when we get to see a player who's been on dialup for
seven years finally switch to broadband. It's rare to have
such an extreme example as we had with a player a couple
months ago. He was very skilled, and it turned out he lives
near the server. So he'd been playing quake2 on dialup for
seven years, was utterly adjusted to compensating for the
dialup lag (200+ milliseconds). He was skilled enough to
sometimes beat good players with low-latency connections.
Recently he got DSL, and has one of the lowest pings on the
server now (about 20 msec.) It took him a few weeks to
adjust. Now he just cleans up - he's now one of the top few
players on the server (each of whom has exceptional skills
and low latency.)

It's rare to get such a pure demonstration of how latency
is a handicap to even the most skilled players who've had
years to adjust to it. Our brains may do well at adjusting
to lag so that we stop noticing it, but we don't seem to
be able to truly compensate for it in a way that puts us on
an even playing field against non-lagged players.

(Ob. ruby: Here's a routine that'll work from the command
line or as a require'd file that will query a quake2 server
for its status - including the ping times of all the players
connected, or send remote console commands to the server if
you have the rcon password.
http://tastyspleen.net/quake/servers/q2cmd-rb.txt
E.g. q2cmd tastyspleen.net 27910 status )

Regards,

Bill


From: <jason_watkins@pobox.com>