Harry can just claim to have already used it that day for an innocuous purpose, like studying or something. Sure, McGonagall could accuse him of stupidity because that leaves him unprepared for an emergency, but pleading guilty to stupidity is easy. (Well, easier, anyway.)
matheist
Don’t be too hasty, whatever you end up deciding! It’s only been a day. A lot of people put a lot of thought into solving this problem, and it makes sense that their attitudes about whether the problem was too easy, or too hard, or whether they guessed the author’s solution, or whether it’s unrealistic, would be emotionally charged by the effort they spent.
Take a week, take a month, talk to people you trust.
I’m a postdoc in differential geometry, working in pure math (not applied). The word “engineering” in the title of a forum would turn me away and lead me to suspect that the contents were far from my area of expertise. I suspect (low confidence) that many other mathematicians (in non-applied fields) would feel the same way.
There’s also the problem of actually building such a thing.
edit: I should add, the problem of building this particular thing is above and beyond the already difficult problem of building any AGI, let alone a friendly one: how do you make a thing’s utility function correspond to the world and not to its perceptions? All it has immediately available to it is perception.
Let me try to strengthen my objection.
Xia: But the 0, 0, 0, … is enough! You’ve now conceded a case where an endless null output seems very likely, from the perspective of a Solomonoff inductor. Surely at least some cases of death can be treated the same way, as more complicated series that zero in on a null output and then yield a null output.
Rob: There’s no reason to expect AIXI’s whole series of experiences, up to the moment it jumps off a cliff, to look anything like 12, 10, 8, 6, 4. By the time AIXI gets to the cliff, its past observations and rewards will be a hugely complicated mesh of memories. In the past, observed sequences of 0s have always eventually given way to a 1. In the past, punishments have always eventually ceased. It’s exceedingly unlikely that the simplest Turing machine predicting all those intricate ups and downs will then happen to predict eternal, irrevocable 0 after the cliff jump.
Put multiple AIXItls in a room together, and give them some sort of input jack to observe each other’s observation/reward sequences. Similarly equip them with cameras and mirrors so that they can see themselves. Maybe it’ll take years, but it seems plausible to me that after enough time, one of them could develop a world-model that contains it as an embodied agent.
I.e. it’s plausible to me that an AIXItl under those circumstances would think: “the Turing machines with smallest complexity which generate BOTH my observations of those things over there that walk like me and talk like me AND my own observations and rewards, are the ones that compute me in the same way that they compute those things over there”.
After which point, drop an anvil on one of the machines, and let the others plug into it and read a garbage observation/reward sequence. AIXItl thinks, “If I’m computed in the same way that those other machines are computed, and an anvil causes garbage observation and reward, I’d better stay away from anvils”.
It’s really great to see all of these objections addressed in one place. I would have loved to be able to read something like this right after learning about AIXI for the first time.
I’m convinced by most of the answers to Xia’s objections. A quick question:
Yes… but I also think I’m like those other brains. AIXI doesn’t. In fact, since the whole agent AIXI isn’t in AIXI’s hypothesis space — and the whole agent AIXItl isn’t in AIXItl’s hypothesis space — even if two physically identical AIXI-type agents ran into each other, they could never fully understand each other. And neither one could ever draw direct inferences from its twin’s computations to its own computations.
Why couldn’t two identical AIXI-type agents recognize one another to some extent? Stick a camera on the agents, put them in front of mirrors and have them wiggle their actuators, make a smiley face light up whenever they get rewarded. Then put them in a room with each other.
Lots of humans believe themselves to be Cartesian, after all, and manage to generalize from others without too much trouble. “Other humans” isn’t in a typical human’s hypothesis space either — at least not until after a few years of experience.
Agreed about Eliezer thinking similar thoughts. At least, he’s thinking thoughts which seem to me to be similar to those in this post. See Building Phenomenological Bridges (article by Robby based on Eliezer’s facebook discussion).
That article discusses (among other things) how an AI should form hypotheses about the world it inhabits, given its sense perceptions. The idea “consider all and only those worlds which are consistent with an observer having such-and-such perceptions, and then choose among those based on other considerations” is, I think, common to both these posts.
(I haven’t seen the LW co-working chat)
If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you’re free to call them out on it and shame them for it if you want.
I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of “what you said is sexist” as opposed to “you are sexist”.
To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.
Umm… but caffeine is also addictive. This seems like a flaw in the plan.
Are you saying you believe this theory? (What’s the evidence?) Or merely that I’m disbelieving it too quickly?
There’s just no reason for it, story-wise. If EY had wanted the distance to Pioneer 11 to relate to Quirrell’s zombie-ness in this way, he would have written the story so that the hard time-travel limit was 4.84 hours, matching Pioneer 11’s distance on the last day of classes. That would make a good story.
But the dates don’t line up, and so there’s no reason to believe that this is anything other than a fun theory.
Very clever idea! But it doesn’t pan out, sadly. I just checked on Wolfram-Alpha. The distance from the earth to Pioneer 11 on the Ides of May, 1992, Quirrell’s presumed last day of class, is actually 4.84 light hours, not 6.
Some experimenting on W-A shows that Pioneer 11 passes 6 light hours around August 25, 1995.
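For anyone who wants to sanity-check the conversion without Wolfram-Alpha, here’s a minimal sketch. The AU figures in the comments are my own rough back-inference from the 4.84-light-hour number above, not values from W-A itself:

```python
# Convert a spacecraft's distance (in astronomical units) to light-hours.
C_KM_PER_S = 299_792.458    # speed of light, km/s
KM_PER_AU = 1.495978707e8   # one astronomical unit, km

def light_hours(distance_au: float) -> float:
    """Time for light to travel `distance_au` astronomical units, in hours."""
    return distance_au * KM_PER_AU / C_KM_PER_S / 3600

# ~34.9 AU (a rough guess at Pioneer 11's distance in May 1992)
print(round(light_hours(34.9), 2))   # ~4.84 light-hours

# Distance at which the 6-light-hour mark is crossed:
print(round(6 * 3600 * C_KM_PER_S / KM_PER_AU, 1))   # ~43.3 AU
```

So the 6-light-hour mark corresponds to roughly 43 AU, which Pioneer 11 (receding at a couple of AU per year) would plausibly reach a few years after mid-1992, consistent with the August 1995 date above.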
He also spent a long time with the sorting hat.
“Goyle, Gregory!” There was a long, tense moment of silence under the Hat. Almost a minute.
Chapter 9
Hm, that’s a very good point. If Harry is aware of his own ignorance, then he might be willing to accept that there are ways of knowing things like “which spell did the dark lord cast”, without actually knowing himself what those ways are.
In that case — i.e. the case where Harry is aware of his own ignorance, and aware of it in that moment — I have no idea what else the note of confusion could be.
I like the new changes to chapter 7 (I’m not sure how long they’ve been up). The conversation between Harry and Draco flows better, makes more sense for the characters, and the force of the original text is still present.
Two thumbs up!
Yeah, that makes sense. Good call.
I only just realized that Harry must have purchased that Spoon +4 in Diagon Alley, since he’s not capable of wandless magic and we never hear of him using a wand when his spoon is stirring his cereal for him.
Interestingly, I also thought that the green goggles mentioned in the same sentence were a Wizard of Oz shoutout—but they turned out to have an in-story use as well. When will we see bounce boots, knives +3, and forks +2?
Caution, possible spoilers, in the form of comments about the guessability (or lack thereof) of the plot. First quote and second quote.
I always assumed that the note of confusion was, “How could anyone possibly know what spells the dark lord cast, and what the effects were, if there were no survivors besides a baby?”
Ng gigebcrf, rl fnlf, “V gubhtug crbcyr jrer tbvat gb trg “gur cybg” sebz Pu. 1-3, cbffvoyl Pu. 1, naq guvf jnf gur Vyyhfvba bs Genafcnerapl”, naq yngre “Ru, lbh’yy frr jung V’z gnyxvat nobhg nsgre lbh ernq gur svany nep naq gura ernq Puncgre 1 ntnva.”
What would a hypothesis about the end of the story look like which uses only information from chapter 1?
Claim: Harry’s war with Voldemort will destroy the world. Support: In Chapter 1, Petunia says about Lily’s reasons for not making her pretty, “And Lily would tell me no, and make up the most ridiculous excuses, like the world would end if she were nice to her sister, or a centaur told her not to …” Suppose Lily really did say those things, and believed them, and that there was the force of a prophecy behind them. If Lily hadn’t made Petunia pretty, Petunia would not have married Michael Verres, and Harry would not have grown up with science and math and sci-fi (and the attendant humanism) and rationality. A much weaker Harry would have attended Hogwarts, and fought Voldemort, and presumably would have lost. The world would survive, albeit under Voldemort’s thumb.
As a result of Petunia being made pretty, Harry grew up around books that made him strong, strong enough to pose a credible challenge to Voldemort. If they’re evenly matched, and fight to the death, then they take the world down with them.
This feels consistent with the events in the story so far, but it doesn’t really seem that the story is driving towards this conclusion. Except most recently, with the ominous feelings from the various seers following (caused by? who knows) Harry’s ominous resolution in chapter 85.
But it’s all I’ve got for a prediction that’s consistent with the events thus far and is foreshadowed in chapter 1.
When will Harry tell Hermione the truth? I feel like he should insist she learn Occlumency first.