To be fair, it’s really hard to figure out WTF is going on when humans are involved. Their reasoning is the product of multiple motivations and a vast array of potential reasoning errors. If you don’t believe me, try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any other game that involves secrets and lying.
It is useful to consider because if AI isn’t safe when contained to the best of our ability, then no method that relies on AI containment is safe (i.e., boxing it behind a chat interface, and all the other possibilities).
A story for the masses is necessary, and this doesn’t appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened; he might have already done that, and this could be their plan. How much time did Harry have to act before needing to show up, anyway (40m? 50m?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth beforehand seems like a bad idea.
You should look at reddit to coordinate your actions with others. One idea I like is to organize the proposals so that all reasonable ideas get raised with minimal duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/
Quirrelmort would be disgusted with us if we refused to consider ‘cheating’ and would certainly kill us for refusing to ‘cheat’ if that was likely to be extremely helpful.
“Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully.”
Game of Thrones and the new Battlestar Galactica appear to me to have characters that are shallow and/or torn between evil and evil. Yet they are very popular and, as far as I can tell, character-driven. I was wondering what that means. One thought I had is that many people are interested in relationship conflicts, and that the characters don’t need to be deep; across the main cast they just need to reflect the personalities of the audience (as messed up as the audience might be).
With the understanding that I only have a few minutes to check for research data:
http://www.ncbi.nlm.nih.gov/pubmed/1801013
http://www.ncbi.nlm.nih.gov/pubmed/21298068 - “cognitive response … to light at levels as low as 40 lux, is blue-shifted”
Why hasn’t Voldemort suspended Harry in the air? He floated himself into the air as a precaution against proximity, line-of-sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in the air, his partial transfiguration options would be vastly reduced.
Why hasn’t Voldemort rendered Harry effectively blind/deaf/etc.? Harry is gaining far more information in real time than is necessary for Voldemort’s purposes.
Also, it seems prudent not to let pieces of Harry get scattered all over the place by shooting him, smashing him, etc. without some form of containment. I don’t know how some part of Harry could cause problems, but if elimination is what Voldemort is aiming for, the careful move is to destroy every part of him with Fiendfyre (blood, guts, and all).
Can Fawkes be summoned to extract Harry? If it helps, Harry can decide to go to Azkaban.
Harry should be aware that reality is basically doomed to repeat the Atlantis mistake by now (either via AGI or whatever Atlantis unlocked). With the vow that Voldemort made him take, he can honestly say that he is the best bet to avoid that fate. That is, Voldemort now needs Harry (and Hermione) to save reality. This seems like the most straightforward way to get out of the current annoyance.
Some partial transfiguration options I haven’t seen mentioned:
Colorless/odorless neurotoxins (Harry should have researched these, as he is in ‘serious mode’ now that Hermione died), delivered via the ground directly into each Death Eater and/or into the air in specific areas.
Nanobots—I can’t recall at this time whether this would work or whether Harry needs to have the design already in his head. It is possible Atlantis tech already utilizes a vast array of these.
Transfiguration may allow one to exploit quantum weirdness. Many things that can happen at very small scales could happen at large scales if everything were lined up just so (which never happens naturally, but which transfiguration may make possible).
Lucius is both dead and warm. I think he’s dead dead unless Eliezer has someone like Harry do something within a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.
If you want to make the scenario more realistic, then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hardest part for Voldemort is solving this problem in a short time span and NOT coming up with a solution that foils Harry. The reason experienced soldiers/gamers with little to no intelligence still win against highly intelligent combatants with no experience is that TIME matters when you’re limited to a single human’s processing power. In virtually every combat situation one is forced to make decisions faster than one can search the solution space; only experience compensates for this deficit to any measurable degree. In this situation there are several aspects Voldemort has no experience with. If he must spend his cognitive resources considering these aspects and cannot draw on experience, mistakes become much more likely.
Actually, this isn’t anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is ‘good’, 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways in which that could happen with Harry making the result ‘good’.
I look forward to the results of this study. Quite frankly, most soft-science fields could use this sort of scrutiny. I’d also love to see how reproducible the studies done by medical doctors (as opposed to research scientists) are. Even the hard sciences have a lot of publications with problems; however, these erroneous results, especially if they are important to current topics of interest, are discovered relatively quickly, since other labs often need to reproduce the results before moving forward.
I would add one caution. Failure to replicate an article’s results does not necessarily mean the results are wrong. It could simply mean the group trying to reproduce the results had any number of other problems.
I am having trouble scanning the HPMoR thread for topics I’m interested in due to both its length and the lack of hierarchical organization by topic. I would appreciate any help with this problem, since I do not want to make comments that simply duplicate previous comments I failed to notice. With that in mind, is there a discussion forum or some method of scanning the HPMoR discussion thread that doesn’t involve a lot of effort? I have not found sorting comments by points to be useful in this respect.
Edit: I’m new and this is my 1st comment. I’ve read a lot of the sequences, but I don’t know my way around yet. It’s quite possible I’m missing a lot about how things work here.
Your phrasing makes it also look like a plausible mistake for someone in a new situation with little time to consider things.
I begin to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up; however, I wonder if there are enough chapters left to do these things justice. I also wonder whether Eliezer will do anything like what was done for Worm, where the author invited suggestions for epilogues for specific characters.
I’m glad to hear it is working well and is well received!
Once there has been some experience running these workshops, I really hope CFAR can design something for meetup groups to try or implement, and/or an online version.
Is there a CFAR webpage that covers this particular workshop and how it went?
My draft attempt at a comment. Please suggest edits before I submit it:
The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don’t believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, etc. No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation). A rough sketch of the speed-up arithmetic follows after the links below.
Here are two websites that go into much greater detail about the problem:
AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/
Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/
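To make the “thinks N times faster” claim concrete, here is a minimal back-of-the-envelope sketch. The hardware and brain throughput figures are my own illustrative assumptions (estimates of the brain’s computational capacity vary by orders of magnitude), not numbers from the draft above.

```python
# Back-of-the-envelope comparison for the "thinks N times faster" claim.
# All figures below are illustrative assumptions, not established facts.

brain_ops_per_sec = 1e16      # assumed order-of-magnitude estimate for a human brain
hardware_ops_per_sec = 1e17   # assumed: a large compute cluster dedicated to one simulated mind

speedup = hardware_ops_per_sec / brain_ops_per_sec
print(f"Naive speed-up if the simulation were perfectly efficient: {speedup:.0f}x")

# How quickly a sped-up mind gets through a human lifetime's worth of thinking.
for factor in (10, 100, 1_000_000):
    days_needed = 80 * 365 / factor
    print(f"At {factor:>9,}x: 80 years of thinking takes about {days_needed:,.1f} days")
```

Even the modest 10x case compresses a career of deliberation into a few years of wall-clock time, which is the intuition the draft is trying to convey.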
“If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.”
This is false. Just because we do not know what role a lot of DNA performs does not mean it is ‘almost certainly junk’. There is far more DNA that is critical than just the 30,000 gene coding regions. You also have: genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even in cases where the DNA isn’t currently ‘in use’ that DNA may be critical to the ongoing stability of our genome over multiple generations or have other unknown functions.
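For a rough sense of the sizes involved, here is a quick back-of-the-envelope sketch; the genome length and the functional fractions are round illustrative assumptions on my part, not figures taken from the quoted passage.

```python
# Rough arithmetic behind the "750 megabytes of DNA" figure and the fraction of
# the genome that is protein-coding versus otherwise functional.
# The numbers below are round illustrative assumptions, not precise values.

base_pairs = 3.2e9            # approximate human genome length
bits_per_base = 2             # A/C/G/T encodes 2 bits per base
genome_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"Genome size: ~{genome_megabytes:.0f} MB")   # ~800 MB, same ballpark as the quoted 750 MB

protein_coding_fraction = 0.015   # assumed: exons of the roughly 20,000-30,000 genes
constrained_fraction = 0.08       # assumed: regulatory/structural elements under selection

print(f"Protein-coding DNA: ~{protein_coding_fraction * genome_megabytes:.0f} MB")
print(f"Other functionally constrained DNA (rough estimate): ~{constrained_fraction * genome_megabytes:.0f} MB")
```

The point of the sketch is that the coding regions are indeed a small slice of the total, but the DNA that matters plausibly extends well beyond them.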
If that is the case, then the hat didn’t actually say “it couldn’t tell if Harry had any false memories.” It said it couldn’t detect deleted memories, and it seems to imply that ‘sophisticated analysis’ of all of his memories for ‘inconsistencies’ would be required to do so. The false memory given to Hermione is at the forefront of her mind and doesn’t require the hat to scan her memories (though Hermione could presumably replay memories of the event for the hat). In addition, the false memory is entirely out of character for Hermione, which is something the hat, at a minimum, should be able to verify. Considering the quote specifically addresses foreign memory, it seems entirely possible the hat may immediately detect the false memory for what it is.
“Sorry if it offends you, I just don’t think in general that you should apply this stuff to society. Like… no.”
Let me translate: “You should do what I say because I said so.” This is an attempt to overpower you, and it is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and to note that it is extremely offensive. If they cannot be reasoned with, then you just have to play the political game humans have been playing for ages.