A confused model of the self-indication assumption

Imagine that I write a computer program that starts by choosing a random integer W between 0 and 2. It then generates 10^(3W) simple random math problems, numbering each one and placing it in a list P. Finally, it chooses one problem from P uniformly at random and presents it to me, without telling me that problem's number.
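
For concreteness, here's a minimal sketch of that program in Python (the function and variable names beyond W and P are my own, not part of the setup):

```python
import random

def first_model():
    # A sketch of the program as described above.
    W = random.randint(0, 2)                                   # random integer between 0 and 2
    P = [f"problem {i}" for i in range(1, 10**(3 * W) + 1)]    # 10^(3W) numbered math problems
    shown = random.choice(P)                                    # one problem, chosen uniformly
    return shown                                                # I see the problem, not its number or W
```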

In this case, being presented with a single math problem tells me nothing about the state of W, since I expect to be shown exactly one problem no matter what W is. If I subsequently find out that the problem I was shown was P(50), however, that rules out W=0 and makes W=1 a thousand times more likely than W=2.
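
To spell out that update (assuming a uniform prior over W): the chance of being shown specifically P(50) is 0 if W=0, 1/1,000 if W=1, and 1/1,000,000 if W=2, so the posterior odds for W=1 over W=2 come out to 1,000:1. A quick sketch of the arithmetic:

```python
from fractions import Fraction

# Likelihood of being shown P(50) under each value of W, with a uniform prior over W.
likelihood = {0: Fraction(0), 1: Fraction(1, 1000), 2: Fraction(1, 1_000_000)}
total = sum(likelihood.values())
posterior = {w: lk / total for w, lk in likelihood.items()}

print(posterior)                    # W=0: 0, W=1: 1000/1001, W=2: 1/1001
print(posterior[1] / posterior[2])  # 1000 -- W=1 is a thousand times more likely than W=2
```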

Given that W represents which world we're in, each math problem in P represents a unique person, and being presented with a math problem represents experiencing being that person (or knowing that that person exists), the self-indication assumption says that my model is flawed.

According to the self-indication assumption, my program needs an extra step to be a proper representation. After it generates the list of math problems, it then needs to choose a second random number, X, and present me with the problem numbered X only if such a problem exists. In this case, whether or not I am presented with a math problem does tell me something about W: I have a much higher chance of getting a problem if W=2 and a much lower chance if W=0. And finding out that the one math problem I was presented with was P(50) tells me much more about X than it does about W.
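
Here is how I read that extra step, again as a sketch (I'm assuming X is drawn uniformly from 1 to 10^6, the largest list the program can produce; the names are mine):

```python
import random

MAX_N = 10**6   # assuming X ranges over the largest possible list size

def sia_model():
    # A sketch of the modified program with the extra step.
    W = random.randint(0, 2)
    P = [f"problem {i}" for i in range(1, 10**(3 * W) + 1)]
    X = random.randint(1, MAX_N)     # second random number
    if X <= len(P):                   # present P(X) only if a problem numbered X exists
        return P[X - 1]
    return None                       # otherwise I'm shown nothing
```

Run this way, I'm shown a problem with probability 10^(3W)/10^6, which is where the much higher chance under W=2 comes from.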

I don't see why this second model is the proper representation, or why my first model is flawed. I suspect it relates to thinking about the issue in terms of a specific person rather than any person in the relevant set, but I tend to get lost in the math of the usual discussions. Help?