Very interesting!
My wife and I have been talking about a “dinner suggestion” program and an
“activity suggestion” program with the same requirements. I never gave much
thought to the math behind it, though; I figured that part would be obvious.
I’m pleased to find that it isn’t!
Hmmm…
I think your gut is right, Hal: under sufficiently skewed weights, items
must repeat consecutively, or the distribution won’t follow your prescribed
probabilities. So just what are the conditions?
Well (and I'm going to switch to float probabilities for now), if anything
is weighted more than 0.5, it will have to repeat sooner or later. Assuming
we absolutely want to prevent repeats (which Katy and I do, and I'm supposing
you do, too, since this feels like a rather "psychological" algorithm), then
something with probability exactly 0.5 would have to occur at every other
spot. Weird. No, that doesn't feel right.
Let’s try something else, which I know won’t work, but seeing how it won’t
work might be enlightening. It’s the bone-headed approach. Let’s say
that at each step, we use the algorithm from your last email (provided by
Tabby or something, right?), but we retry if we get the same thing we got
last time. (This ought to break the distribution requirement.)
So if we have items a => 0.5, b => 0.25, c => 0.25, what happens? (Anyone
remember Markov chains? Let me see if I can look it up… I give up; I’ll
just make it up as I go along…)
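Here's roughly what that looks like in Ruby, by the way. weighted_pick is
just my stand-in for the routine from your last email (I don't have it in
front of me), and bone_headed_pick is my name for it, not a term of art:

  # Stand-in weighted picker: returns a key with probability
  # proportional to its weight.
  def weighted_pick(weights)
    r = rand * weights.values.inject(:+)
    weights.each do |item, w|
      return item if r < w
      r -= w
    end
    weights.keys.last  # floating-point safety net
  end

  # The bone-headed approach: redraw whenever we'd repeat ourselves.
  def bone_headed_pick(weights, last)
    loop do
      item = weighted_pick(weights)
      return item unless item == last
    end
  end

  WEIGHTS = { :a => 0.5, :b => 0.25, :c => 0.25 }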
So what are the long-term probabilities of a, b, and c? We'll call them pa,
pb, and pc. I will also let pa(c) refer to the probability of getting an
'a', given that we just had a 'c'.
So:
pa = (pa(a) * pa) + (pa(b) * pb) + (pa(c) * pc)
pb = (pb(a) * pa) + (pb(b) * pb) + (pb(c) * pc)
pc = (pc(a) * pa) + (pc(b) * pb) + (pc(c) * pc)
But pa(a), pb(b), and pc(c) are all 0 (we allow no repeats), and a retry just
renormalizes the remaining weights, so e.g. pa(b) = 0.5 / (1 - 0.25) = 0.6667.
That simplifies things to:
pa = (pb * 0.6667) + (pc * 0.6667)
pb = (pa * 0.5000) + (pc * 0.3333)
pc = (pa * 0.5000) + (pb * 0.3333)
Which looks right, since the pa coefficients sum to 1, as with the pb and pc
coefficients. I guess that’s the stochastic matrix:
|    0  0.50  0.50 |
| 0.67     0  0.33 |
| 0.67  0.33     0 |
(Or are the columns supposed to add to 1? I don't know… I think that if
that matrix is M, then (pa,pb,pc) * M = (pa,pb,pc)… or something. I'll
just solve it the old-fashioned way.)
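First, though, a quick check that I got those renormalized entries right, in
Ruby (carrying on from my sketch above; ITEMS and TRANSITION are my names):

  ITEMS = WEIGHTS.keys

  # TRANSITION[i][j] = probability of item j given that item i came
  # last: j's weight, renormalized with i's weight thrown out.
  TRANSITION = ITEMS.map do |last|
    ITEMS.map do |nxt|
      nxt == last ? 0.0 : WEIGHTS[nxt] / (1.0 - WEIGHTS[last])
    end
  end

  TRANSITION.each { |row| puts row.map { |p| "%.4f" % p }.join("  ") }
  # 0.0000  0.5000  0.5000
  # 0.6667  0.0000  0.3333
  # 0.6667  0.3333  0.0000

Same numbers. Okay, the old-fashioned way: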
pb = pc by symmetry (just like i and -i, Daniel; just like true and false,
Phil :), so…
pa = (2/3) * (pb + pc) = (4/3) * pb
Since pa + pb + pc = 1, that makes (4/3 + 1 + 1) * pb = 1, so:
pa = 0.4
pb = 0.3
pc = 0.3
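Brute force agrees, for what it's worth: start from any old guess and keep
multiplying by M until it stops moving (power iteration, I believe it's
called). Carrying on with TRANSITION from above:

  # Apply (pa,pb,pc) * M over and over; the vector settles down on
  # the stationary distribution.
  pi = Array.new(ITEMS.size, 1.0 / ITEMS.size)
  100.times do
    pi = (0...ITEMS.size).map do |j|
      (0...ITEMS.size).map { |i| pi[i] * TRANSITION[i][j] }.inject(:+)
    end
  end
  p pi  # => [0.4, 0.3, 0.3], give or take floating point

(Counting the output of bone_headed_pick over a few hundred thousand draws
gives the same numbers, if you trust dice more than matrices.)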
Well, that wasn’t terribly enlightening, was it?
Still, if your weights were 4 and 3 and 3, you’d know what to do! Just
apply the bone-headed approach with weights 2 and 1 and 1. So in general,
how do you do what we just did backwards? Probably something about
inverting the stochastic matrix… If you can’t tell, I slept through the
vast majority of linear algebra. (You’d think I’d be kicking myself right
about now, but I really needed that sleep…)
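I'll hazard one guess at "backwards", though, with no matrix inversion: the
balance equation says pj = wj * (sum over i != j of pi / (1 - wi)), so wj is
pj divided by that sum, which smells like a fixed-point iteration. Pure
conjecture on my part (weights_for is my own made-up helper, and I have no
convergence proof), but it does recover 0.5, 0.25, 0.25 from 0.4, 0.3, 0.3:

  # Guess the input weights for a target long-run distribution by
  # iterating  w[j] <- target[j] / sum_{i != j} target[i] / (1 - w[i])
  # and renormalizing each pass.  A hunch, not a theorem.
  def weights_for(target)
    w = target.dup
    50.times do
      w = target.map do |j, pj|
        s = target.reject { |i, _| i == j }
                  .map { |i, pi| pi / (1.0 - w[i]) }
                  .inject(:+)
        [j, pj / s]
      end.to_h
      total = w.values.inject(:+)
      w = w.map { |k, v| [k, v / total] }.to_h
    end
    w
  end

  p weights_for({ :a => 0.4, :b => 0.3, :c => 0.3 })
  # => roughly { :a => 0.5, :b => 0.25, :c => 0.25 }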
Also, what are the restrictions on the initial weights now?
I’ll hunt around for my old linear algebra book,
Chris