The dangers of sleeping

From: Ben Giddings [mailto:bg-rubytalk@infofiend.com]
Sent: Friday, June 11, 2004 9:56 AM

Dale Martenson wrote:
> I guess when I think of sleeping, I don't see the relationship
> between sleeping for a particular length of time and what the clock
> says. This may be because I work with a lot of embedded systems,
> where tick counts are used for timing events. Date and time
> information may or may not be available.

Right, and there's a vast difference between how things are done on
embedded systems and on full-fledged OSes. Platform-neutral languages
like Ruby, especially, very rarely have access to low-level system
services such as real-time clocks.

To be portable, Ruby probably uses system services at as high a level
of abstraction as possible. Since I'm pretty sure there is no
platform-independent way of doing RTC-style timing, I would guess Ruby
relies on the system clock.

In eval.c, the routine rb_thread_wait_for(time) is used to handle the sleep
operation.

If there is only one thread, or the thread is marked critical, or it is
about to be killed, a combination of 'select(0,0,0,0,&time)' and
'timeofday()' is used. As long as the select is not interrupted, you
will get fairly accurate timings. 'timeofday()', on the other hand, is
at the mercy of the system clock.

If more than one thread exists, 'timeofday()' is used.

This explains the difference in execution between the programs in the
original post.
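For illustration, here is a simplified sketch of that single-thread
path (mine, not the actual eval.c source): sleep with select(), and
when it returns early, recompute the time left from timeofday().
Because the deadline comes from the wall clock, stepping the system
clock during the sleep shifts the wakeup.

    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/time.h>

    static double timeofday(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec * 1e-6;
    }

    static void sleep_for(double seconds)
    {
        double deadline = timeofday() + seconds;  /* wall-clock deadline */
        for (;;) {
            double rest = deadline - timeofday();
            if (rest <= 0)
                break;
            struct timeval tv;
            tv.tv_sec  = (time_t)rest;
            tv.tv_usec = (long)((rest - tv.tv_sec) * 1e6);
            /* select() with no fd sets is a portable sub-second sleep;
             * a signal can interrupt it early, hence the loop. */
            select(0, NULL, NULL, NULL, &tv);
        }
    }

    int main(void)
    {
        sleep_for(2.0);  /* should block for about two seconds */
        puts("done");
        return 0;
    }

If the system clock jumps backwards between the deadline calculation
and the wakeup check, the computed remaining time grows and the sleep
runs long, which is the behavior being discussed here.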

> I just thought it was odd that it wasn't accounted for and that its
> behavior varied depending on whether it was in a thread or not.

Yeah, I guess it is odd, but I'd consider it a corner case. If you want
to look at how sleep is implemented under the hood in Ruby, you're
welcome to do it. If you can suggest a better way of doing it, one that
can deal with the system clock being changed by a huge amount, that
would be great!

I don't see it as a corner case, the same way I don't see leap years as
a corner case. Every fall, most computers automatically adjust for
daylight saving time; I would expect the software running on my systems
to handle that.

The C Standard Library defines 'clock()' and CLOCKS_PER_SEC, but they are
not implemented on all systems. On Unix/Linux, you can develop your own tick
count using SIGALRM and an interval timer ('setitimer'). I will end up going
this route.
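For illustration, here is a minimal sketch of that tick-count idea (the
10 ms period is my arbitrary choice, not anything prescribed):
setitimer() raises SIGALRM on an interval and the handler bumps a
counter. Ticks only ever move forward, so a step of the system clock
doesn't corrupt durations measured in ticks.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        ticks++;                      /* one tick per timer expiry */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it;          /* fire every 10 ms */
        it.it_interval.tv_sec  = 0;
        it.it_interval.tv_usec = 10000;
        it.it_value = it.it_interval;
        setitimer(ITIMER_REAL, &it, NULL);

        sig_atomic_t start = ticks;
        while (ticks - start < 100)   /* 100 ticks = roughly 1 second */
            pause();                  /* block until the next signal */

        printf("elapsed ~%d ms\n", (int)(ticks - start) * 10);
        return 0;
    }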

···

-----Original Message-----

Dale Martenson wrote:

> I don't see it as a corner case, the same way I don't see leap years
> as a corner case. Every fall, most computers automatically adjust for
> daylight saving time; I would expect the software running on my
> systems to handle that.

Right, but that's the local timezone, not the UTC clock. Every decent OS (and I think these days that may even include Windows) internally uses UTC, and just displays a different time to the user.

gettimeofday(), on Linux at least, returns the number of seconds and microseconds since January 1st, 1970. This is the value that should never suddenly change, and should never, under any circumstances, go backwards.
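A small sketch illustrating that split: gettimeofday() hands back the
raw UTC epoch count, and the timezone (DST included) is applied only
when the value is formatted for display via localtime().

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);

        time_t t = tv.tv_sec;
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));

        printf("epoch: %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        printf("local: %s\n", buf);
        return 0;
    }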

It seems pretty reasonable to me to expect this value to stay sane, but if you know you're on a computer whose clock will be changed by big amounts, or you have specialized needs, go right ahead and build a better system, and if you do, please share!

Ben