How’d this go? Just searched LW for “neurofeedback” since I recently learned about it
Jack R (Jack Ryan)
Berkeley group house, spots open
That argument makes sense, thanks
We are very likely not going to miss out on alignment by a 2x productivity boost, that’s not how things end up in the real world. We’ll either solve alignment or miss by a factor of >10x.
Why is this true?
the genome can’t directly make us afraid of death
It’s not necessarily direct, but in case you aren’t aware of it, prepared learning is a relevant phenomenon, since apparently the genome does predispose us to certain fears.
Seems like this guy has already started trying to use GPT-3 in a videogame: GPT3 AI Game Prototype
[Linkpost] Can lab-grown brains become conscious?
Not sure if it was clear, but the reason I asked is that, if you think the fraction changes significantly before AGI, then it seems like the claim that Thane quotes in the top-level comment wouldn’t be true.
Don’t timelines change your views on takeoff speeds? If not, what’s an example piece of evidence that updates your timelines but not your takeoff speeds?
Same here; also interested in whether John was assuming that the fraction of deployment labor that is automated changes negligibly over time pre-AGI.
Humans can change their action patterns on a dime, inspired by philosophical arguments, convinced by logic, indoctrinated by political or religious rhetoric, or plainly because they’re forced to.
I’d add that action patterns can change for reasons other than logical/deliberative ones. For example, adapting to a new culture means you might develop new reactions to objects, gestures, etc. that are considered symbolic in that culture.
so the edge is terminal
Earlier you said that the blue edges were terminal edges.
What are some of the “various things” you have in mind here? It seems possible to me that something like “AI alignment testing” is straightforwardly upstream of what players want, but maybe you were thinking of something else.
“Go with your gut” [...] [is] insensitive to circumstance.
People’s guts seem very sensitive to circumstance, especially compared to commitments.
But the capabilities of neural networks are currently advancing much faster than our ability to understand how they work or interpret their cognition;
Naively, you might think that as opacity increases, trust in systems decreases, and hence something like “willingness to deploy” decreases.
How good of an argument does this seem to you against the hypothesis that “capabilities will grow faster than alignment”? I’m viewing the quoted sentence as an argument for the hypothesis.
Some initial thoughts:
A highly capable system doesn’t necessarily need to be deployed by humans to disempower humans, meaning “deployment” is not necessarily a good concept to use here.
On the other hand, deployability of systems increases investment in AI (how much?), meaning that increasing opacity might in some sense decrease future capabilities compared to counterfactuals where the AI was less opaque.
I don’t know how much willingness to deploy really decreases from increased opacity, if at all.
Opacity can be thought of as the inability to predict behavior in a given new environment. As models have scaled, the number of benchmarks we test them on also seems to have scaled, which does help us understand their behavior. So perhaps the measure that’s actually important is the “difference between tested behavior and deployed behavior,” and it’s unclear to me what this metric looks like over time. [ETA: it feels obvious that our understanding of AI’s deployed behavior has worsened, but I want to be more specific and sure about that]
I was thinking of the possibility of affecting decision-making, either directly by rising through the ranks (not very likely) or indirectly by being an advocate for safety at an important time and pushing things into the Overton window within an organization.
I imagine Habryka would say that a significant possibility here is that joining an AGI lab will wrongly turn you into an AGI enthusiast. I think biasing effects like that are real, though I also think it’s hard to tell in cases like that how much you are biased vs. updating correctly on new information. One could also make similar bias claims about the AI x-risk community (e.g. there is social pressure to be doomy; being exposed only to heuristic arguments for doom and few heuristic arguments for optimism will bias you toward being doomier than you would be given more information).
It seems like you are confident that the delta in capabilities would outweigh any delta in general alignment sympathy. Is this what you think?
Attempting to manually specify the nature of goodness is a doomed endeavor, of course, but that’s fine, because we can instead specify processes for figuring out (the coherent extrapolation of) what humans value. […] So today’s alignment problems are a few steps removed from tricky moral questions, on my models.
I’m not convinced that choosing those processes is significantly non-moral. I might be misunderstanding what you are pointing at, but the fact that being able to choose the voting system gives you power over the vote’s outcome feels like evidence of this sort of thing: that meta-level decisions are still importantly tied to the decisions themselves.
I think there should be a word for your parsing, maybe “VNM utilitarianism,” but I think most people mean roughly what’s on the wiki page for utilitarianism:
Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all affected individuals
One hypothesis I’ve had is that people with more MIRI-like views tend to be more arrogant themselves. A possible mechanism is that the idea that the world is going to end and that they are the only ones who can save it is appealing in a way that shifts their views on certain questions and changes the way they think about AI (e.g. they need less convincing that they are some of the most important people ever, so they spend less time considering why AI might go well by default).
[ETA: In case it wasn’t clear, I am positing subconscious patterns correlated with arrogance that lead to MIRI-like views.]