These invent-your-own definition problems are always funny. Some of you have
far more twisted minds than I can claim, and that entertains Hal Fulton.
Let's get to the crazy definitions. Here's a simple one by Dirk Meijer, shown
through an example script I wrote:
Sleeping for 2 seconds...
Seconds so far: 2.
Sleeping for 1 seconds...
Seconds so far: 3.
Sleeping for -1 seconds...
Seconds so far: 3.
Sleeping for 2 seconds...
Seconds so far: 4.
Notice that the negative sleep seemed to be instant and the sleep that followed
it was shortened by one second. Here's Dirk's code:
alias :old_sleep :sleep    # keep the original sleep() around

$sleep = 0                 # global offset counter

def sleep(n)
  $sleep += n
  if $sleep > 0
    old_sleep($sleep)      # really sleep the accumulated time
    $sleep = 0
    true
  else
    $sleep                 # still "in debt", so return immediately
  end
end
Nothing too surprising there. Any call with a positive sleep time adds to a
global $sleep counter, triggers the old sleep() routine, and resets the global
counter to zero. A call with a negative time just subtracts from the
counter. This allows code to queue up offsets, shortening future calls to
sleep().
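To make that concrete, here's a quick irb-style sketch of my own (the return
values follow directly from the code above, assuming it has already been
loaded):

sleep(-3)    # => -3      nothing happens, but the debt is remembered in $sleep
sleep(1)     # => -2      still in debt, so this returns immediately
sleep(5)     # => true    $sleep reaches 3, so we really sleep three seconds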
For the sake of completeness, here is my test script used above:
#!/usr/bin/env ruby -w

require "insomnia"

$start_time = Time.now

def pause(time)
  puts "Sleeping for #{time} seconds..."
  sleep(time)
  puts "Seconds so far: #{(Time.now - $start_time).round}."
  puts
end

[2, 1, -1, 2].each { |sec| pause(sec) }
Let's examine a completely different swing at the target, by Mike Nelson:
module Kernel
  def n_sleep(n_sleep_time)
    Thread.current.priority = -n_sleep_time
  end
end

# test stuff
if __FILE__ == $0
  Thread.new { n_sleep(-3); 1.upto(10) { print "A"; sleep(0.1) } }
  Thread.new { n_sleep( 1); 1.upto(10) { print "B"; sleep(0.1) } }
  Thread.new { n_sleep(-2); 1.upto(10) { print "C"; sleep(0.1) } }

  n_sleep(10); 1.upto(10) { print "m"; sleep(0.1) }

  loop { break if Thread.list.size == 1 }
end
Here the idea is that negative sleep means we should run more often than
positive sleep values. Mike translated that concept into Thread priorities,
which seems similar in definition. Thus this one-line solution sets the Thread
priority to the opposite of the passed value (making negatives fast and
positives slow).
In the test code, four Threads run in parallel, all sleeping for the same
intervals, but the initial change of priority affects the order in which they
get scheduled.
Observe:
ACBmACBmACBmACBmACBmACBmACBmACBmACBmACBm
The "A" and "C" Threads always get to go first, because of their higher priority
(negative calls to n_sleep()) and the "m" Thread is always tail-end-charlie.
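If you want to see the flipped sign directly, a little check like this (my own
sketch, not part of Mike's code) does the trick:

t = Thread.new { n_sleep(-3); sleep(0.5) }
sleep(0.1)      # give the thread a moment to call n_sleep()
p t.priority    # => 3    the negative "sleep" became a high priority
t.join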
Those are both clever, but pretty far from the suggested implementation in the
quiz. For an example of that, let's have a peek at Dingsi's code:
class NegativeProc < Proc; end

class ProcStack
  def initialize(*args)
    @negative = args.shift if args.first == true
    @stack    = args
  end

  def + code
    new_stack = @stack.dup
    if code.is_a? NegativeProc
      new_stack.insert(-2, code)
      new_stack.unshift(true)
    elsif code.respond_to? 'call'
      if @negative
        new_stack.insert(-3, code)
      else
        new_stack.push(code)
      end
    end
    ProcStack.new(*new_stack)
  end

  def call
    @stack.each { |p| p.call }
  end

  def ProcStack.sleep(time)
    if time < 0
      NegativeProc.new { Kernel.sleep(time.abs) }
    else
      Proc.new { Kernel.sleep(time) }
    end
  end
end

class Proc
  def + code
    ProcStack.new(self) + code
  end
end
# ...
Breaking this down, the first line just defines a new type of Proc called
NegativeProc. We will see why in a moment.
Next comes the ProcStack class, which orders a bunch of Procs to execute. You
can see in initialize() that a ProcStack tracks whether it is @negative, via an
optional leading true argument, and keeps the @stack of Procs/NegativeProcs to
run.
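For instance (an illustration of my own, peeking inside with
instance_variable_get()):

flagged = ProcStack.new(true, Proc.new { }, Proc.new { })
flagged.instance_variable_get("@negative")    # => true, with two Procs on @stack
plain = ProcStack.new(Proc.new { })
plain.instance_variable_get("@negative")      # => nil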
The tricky method is +(). When a Proc/NegativeProc is added, the @stack is
duplicated, adjusted for the new member, and made into a new ProcStack object.
NegativeProcs get added in front of whatever came before them and they set that
@negative parameter we spotted in initialize(). When something else is
added, the @negative flag causes it to jump in front of the previous Proc and
negative sleep value. (See below for an example.) Otherwise, a Proc is just
pushed onto the stack.
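Those negative insert() indices are the heart of the trick. Here's a quick
sketch of my own showing where they land, with Strings standing in for the
Procs:

["bacon"].insert(-2, "neg sleep")              # => ["neg sleep", "bacon"]
["neg sleep", "bacon"].insert(-3, "chunky")    # => ["chunky", "neg sleep", "bacon"]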
The rest is much easier to take in. call() just triggers each Proc/NegativeProc
in turn. The class method sleep() builds Procs that sleep(). Negative sleep
values build a NegativeProc, so it will trigger the @negative flag dance.
Finally, a +() method is defined on Proc to create the initial ProcStack.
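In other words (again an irb-style sketch of my own, based on the class method
above):

ProcStack.sleep(3).class     # => Proc            sleeps three seconds when called
ProcStack.sleep(-2).class    # => NegativeProc    sleeps two seconds, but reorders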
That ProcStack.+() is pretty hard to imagine without an example, so here's the
code Dingsi included to show it off (with minor changes by me):
# ...

# should print something like "chunky ... bacon\nhooray for foxes"
STDOUT.sync = 1    # added by JEG2 to flush print() calls immediately

whee = proc { print "bacon\n" } +
       ProcStack.sleep(-2) +
       proc { print "chunky " } +
       ProcStack.sleep(1) +
       proc { print "hooray for " } +
       proc { print "foxes" }
whee.call
Here's how the ProcStack builds up when that is run:
#<ProcStack @stack=[proc { print "bacon\n" }]> + ProcStack.sleep(-2)

#<ProcStack @stack=[proc { sleep(2) }, proc { print "bacon\n" }]
            @negative=true> + proc { print "chunky " }

#<ProcStack @stack=[proc { print "chunky " }, proc { sleep(2) },
                    proc { print "bacon\n" }]> + ProcStack.sleep(1)

#<ProcStack @stack=[proc { print "chunky " }, proc { sleep(2) },
                    proc { print "bacon\n" }, proc { sleep(1) }]>
...
What is the practical value of all this? I'm not sure there is one, save
perhaps showing that it's entirely possible to invent new computational
processes without too much effort. Well, that and it's just fun stuff to play
with. Ruby Quiz is all about the fun factor.
My thanks to all you crazy time-traveling programmers with insomnia. May you
all sleep(-3) tonight.
There will be no Ruby Quiz this week. I will be busy participating in the
annual ICFP programming contest and I encourage others to give it a shot. See
you all next week!