as they code I notice nested for loops that could have been one matrix multiplication.
This seems like an odd choice for your primary example.
Is the primary concern that a sufficiently smart compiler could take your matrix multiplication and turn it into a vectorized instruction?
Is it only applicable in certain languages then? E.g. do JVM languages typically enable vectorized instruction optimizations?
Is the primary concern that a single matrix multiplication is more maintainable than nested for loops?
Is it only applicable in certain domains then (e.g. machine learning)? Most of my data isn’t modelled as matrices, so would I need some nested for loops anyway to populate a matrix to enable this refactoring?
Is it perhaps worth writing a (short?) top-level post with a worked-out example of the refactoring you have in mind, and why matrix multiplication would be better than nested for loops?
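For concreteness, here is a minimal sketch of the kind of refactoring I'm guessing you have in mind (this assumes NumPy, and the data and names are made up purely for illustration):

```python
import numpy as np

# Hypothetical example: score every (user, item) pair via a dot product
# of their feature vectors.
rng = np.random.default_rng(0)
users = rng.random((100, 8))   # 100 users, 8 features each
items = rng.random((50, 8))    # 50 items, 8 features each

# Nested-for-loop version.
scores_loops = np.zeros((100, 50))
for i in range(100):
    for j in range(50):
        for k in range(8):
            scores_loops[i, j] += users[i, k] * items[j, k]

# Single matrix multiplication.
scores_matmul = users @ items.T

assert np.allclose(scores_loops, scores_matmul)
```

If this is roughly what you mean, it would also help to spell out whether the benefit you care about is performance (the vectorized version dispatches to optimized BLAS routines) or readability.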
For something to experience pain, some information needs to exist (e.g. in the mind of the sufferer, informing them that they are experiencing pain). There are known information limits, e.g. https://en.wikipedia.org/wiki/Bekenstein_bound or https://en.wikipedia.org/wiki/Landauer%27s_principle
These limits are related to entropy, space, energy, etc., so if you further assume the universe is finite (or perhaps equivalently, that the malicious agent can only access a finite portion of the universe due to e.g. speed-of-light limits), then there is an upper bound on the amount of information possible, which implies an upper bound on the amount of pain possible.
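For reference, the Bekenstein bound caps the information $I$ (in bits) that can be stored in a sphere of radius $R$ containing total energy $E$:

$$I \le \frac{2 \pi R E}{\hbar c \ln 2}$$

so a finite region with finite energy can hold only finitely many bits, whatever physical form the sufferer takes.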
Yeah, which I interpret to mean you’d “lose” (where getting $10 is losing and getting $200 is winning). Hence this is not a good strategy to adopt.
99% of the time for me, or for other people?
99% for you (see https://wiki.lesswrong.com/wiki/Least_convenient_possible_world )
More importantly, when the fiction diverges by that much from the actual universe, it takes a LOT more work to show that any lessons are valid or useful in the real universe.
I believe the goal of these thought experiments is not to figure out whether you should, in practice, sit in the waiting room or not (honestly, nobody cares what some rando on the internet would do in some rando waiting room).
Instead, the goal is to provide unit tests for different proposed decision theories, as part of research on developing self-modifying superintelligent AI.
Any recommendations for companies that can print and ship the calendar to me?
Okay, but then what would you actually do? Would you leave before the 10 minutes is up?
why do I believe that its accuracy for other people (probably mostly psych students) applies to my actions?
Because historically, in this fictional world we’re imagining, when psychologists have said that a device’s accuracy was X%, it turned out to be within 1% of X%, 99% of the time.
I really should get around to signing up for this, but...
Seems like the survey is now closed, so I cannot take it at the time I'm seeing this post.
suppose Bob is trying to decide to go left or right at an intersection. In the moments where he is deciding to go either left or right, many nearly identical copies in nearly identical scenarios are created. They are almost entirely all the same, and if one Bob decides to go left, one can assume that 99%+ of Bobs made the same decision.
I don't think this assumption is true (and thus perhaps you need to put more effort into checking/arguing that it's true, if the rest of your argument relies on it). In the moments where Bob is trying to decide whether to go left or right, there is no a priori reason to believe he would choose one side over the other: he's still deciding.
Bob is composed of particles with quantum properties. For each property, there is no a priori reason to assume that those properties (on average) contribute more strongly to causing Bob to decide to go left rather than right.
For each quantum property of each particle, an alternate universe is created where that property takes on some value. In a tiny proportion of these universes (though still infinitely many of them), "something weird" happens, like Bob spontaneously disappearing, or Bob spontaneously becoming Alice, or the left and right paths disappearing and leaving Bob stranded, etc. We'll ignore these possibilities for now.
In some of the remaining "normal" universes, the properties of the particles have proceeded in such a way as to trigger Bob to think "I should go Left", and in other "normal" universes, the properties of the particles have proceeded in such a way as to trigger Bob to think "I should go Right". There is no a priori reason to think that the proportion of the first type of universe is higher or lower than the proportion of the second type. That is, being maximally ignorant, you'd expect about 50% of Bobs to go left, and 50% to go right.
Going a bit more meta, if MWI is true, then decision theory “doesn’t matter” instrumentally to any particular agent. No matter what arguments you (in this universe) provide for one decision theory being better than another, there exists an alternate universe where you argue for a different decision theory instead.
I see some comments hinting at this pseudo-argument, but I don't think I saw anyone make it explicitly:
Say I replace one neuron in my brain with a little chip that replicates what that neuron would have done. Say I replace two, three, and so on, until my brain is now completely artificial. Am I still conscious, or not? If not, was there a sudden cut-off point where I switched from conscious to not-conscious, or is it a spectrum, with me gradually becoming less and less conscious as this transformation occurred?
If I am still conscious, what if we remove my artificial brain, put it in a PC case, and just let it execute? Is that not a simulation of me? What if we pause the chips, record each of their exact states, and instantiate those same states in another set of chips with an identical architecture?
If consciousness is a spectrum instead of a sudden cut-off point, how confident are we that "simulations" of the type you're claiming are not conscious (as in 0) aren't actually 0.0001 conscious?
I played the game "blind" (i.e. I avoided reading the comments before playing) and was able to figure it out and beat the game without ever losing my ship. I really enjoyed it. The one part that I felt could have been made a lot clearer was that the "shape" of the mind signals how quickly it moves towards your ship; I think I only figured that out around level 3 or so.
I’m not saying this should be discussed on LessWrong or anywhere else.
You might want to lead with that, because there have been some arguments in the last few days that people should repeal the "Don't talk about politics" rule on a rationality-focused Facebook group, and I thought you were trying to argue in favor of repealing that rule.
But I’m saying that the impact of this article and broader norm within the rationalsphere made me think in these terms more broadly. There’s a part of me that wishes I’d never read it in the first place.
For some people “talk less about politics” is the right advice, and for other people “talk more about politics” might be the right advice. FWIW, in my experience, a lot of the people I see talking about politics should not be talking about politics (if their goal is to improve their rationality).
From a brief skim (e.g. "A Democratic candidate other than Yang to propose UBI before the second debate", "Maduro ousted before end of 2019", "Donald Trump and Xi Jinping to meet in March 2019", etc.), this seems to be focused on "non-personal" (i.e. global) events, whereas my understanding is that the OP is interested in tracking predictions for personal events.
Spreadsheet sounds “good enough” if you’re not sure you even want to commit to doing this.
That said, I'm "mildly interested" in doing this, but I don't really have inspiration for questions I'd like to make predictions on. I'm not particularly interested in making predictions about global events and would rather make predictions about personal events. I would like a site that lets me see other people's personal predictions (really, just the questions they're predicting answers to; I don't care about their actual answers), so that I can try to make the same predictions about my own life. So for example, now that I've seen you've submitted a prediction for "Will my parents die this year?", I don't know what your answer is, but I can come up with my own answer to the question of whether or not *my* parents will die this year.
I think this technique only works for one-on-one (or small-group), live interactions; i.e. it doesn't work well for online writing.
The two components that are important for ensuring this technique is successful are:
1. You should tailor the confusion to the specific person you’re trying to teach.
2. You have to be able to detect when the confusion is doing more damage than good, and abort it if necessary.
Note: I’m not sure if at the beginning of the game, one of the agents [of AlphaStar] is chosen according to the Nash probabilities, or if at each timestep an action is chosen according to the Nash probabilities.
It's the former. During the video demonstration, the pro player remarked how, after losing game 1, in game 2 he went for a strategy that would counter the strategy AlphaStar used in game 1, only to find AlphaStar had used a completely different strategy. The AlphaStar representatives responded saying there are actually 5 AlphaStar agents that form the Nash Equilibrium, and he played against one of them during game 1, and then against a different one during game 2.
And in fact, they didn't choose the agents by the Nash probabilities. Rather, they did a "best of 5" tournament in which each of the 5 agents played exactly one game. The human player did not know this, and so going into the 5th game he could not deduce, by process of elimination, which single agent remained, and thus which strategy would counter it.
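To make the distinction in my original note concrete, here is a purely illustrative sketch (the agent names and Nash probabilities below are made up, not DeepMind's actual values):

```python
import random

# Hypothetical Nash mixture over 5 agents (made-up probabilities).
agents = ["agent_1", "agent_2", "agent_3", "agent_4", "agent_5"]
nash_probs = [0.30, 0.25, 0.20, 0.15, 0.10]

NUM_TIMESTEPS = 1000

# "Former": sample one agent from the Nash mixture at the start of the
# game and use it for every timestep.
game_agent = random.choices(agents, weights=nash_probs)[0]
per_game = [game_agent] * NUM_TIMESTEPS

# "Latter": resample from the Nash mixture at every timestep.
per_timestep = [random.choices(agents, weights=nash_probs)[0]
                for _ in range(NUM_TIMESTEPS)]

print("per-game scheme uses:    ", set(per_game))       # always exactly one agent
print("per-timestep scheme uses:", set(per_timestep))   # typically all five
```

(And, as noted above, what they actually ran was neither: a best-of-5 where each agent played exactly one game.)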
I'm assuming you think wireheading is a disastrous outcome for a superintelligent AI to impose on humans. I'm also assuming you think that if bacteria somehow became as intelligent as humans, they would agree that wireheading would be a disastrous outcome for them, despite the fact that wireheading is probably the best solution achievable given how unsophisticated their brains are. I.e. the best solution for their simple brains would be considered disastrous by our more complex brains.
This suggests the possibility that the best solution that can be applied to human brains might likewise be considered disastrous by a more complex brain, if humans somehow became as intelligent as it.
I feel like this game has the opposite problem of 2-4-6. In 2-4-6, it's very easy to come up with a hypothesis that appears to work with every set of test cases you come up with, and thus to become overconfident in your hypothesis.
In your game, I had trouble coming up with any hypothesis that would fit the test cases.