This reminds me a bit of Parental Timeline Reimprinting: https://thewholenesswork.com/core-transformation/video-9/
Which involves getting into a core state (e.g. Oneness, Love, Inner Peace), then experiencing your entire life up to now through the lens of that core state. When coaching, if someone has a breakthrough, realization, or strong feeling, I’ll often use the reimprinting process to make sure they integrate it with all their memories.
I like Ben Kuhn’s solution in this comment: https://www.benkuhn.net/lux/#comment-1595033477
A few 7-way splitters and a whole lot of 100 watt equivalent LEDs.
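For a rough sense of the scale this setup gets you (all numbers below are my assumptions, not from the linked comment; ~1600 lumens is a typical figure for a 100-watt-equivalent LED bulb):

```python
# Rough lumens estimate for a splitter-based lighting rig.
# All quantities are illustrative assumptions, not from the linked comment.
LUMENS_PER_BULB = 1600   # typical output of a 100W-equivalent LED
SPLITTERS = 3            # "a few" 7-way splitters
BULBS_PER_SPLITTER = 7

total_lumens = SPLITTERS * BULBS_PER_SPLITTER * LUMENS_PER_BULB
print(total_lumens)  # 33600
```

That's well beyond what a single ordinary fixture provides, which is the point of the splitter approach.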
Except for the long AI winter where most AI research produced very little value. Just because we’ve broken through one constraint and started on another S-curve does not mean we will not hit another constraint.
It seems like there could be a tail risk here of decreasing genetic variation and thereby increasing the impact of something like a pandemic. It also seems like this approach could lead to less diversity in ideas/art/businesses etc., because genetic predispositions would become a monoculture.
Do you think either of these things are substantial worries? Am I misunderstanding something about what’s being suggested here?
Trying to follow this. Doesn’t Y (the AI not taking over the world during training) make it less likely that X (the AI will take over the world at all)?
Which seems to contradict the argument structure. Perhaps you could give a few more examples to make the structure clearer?
I think there is a community of Discord rationalists and Tumblr rationalists.
I review Gwern’s post pretty much every time I resume the habit; it doesn’t look like it has been evaluated in connection with physical skills.
It is hard to find, but it’s covered here: https://www.gwern.net/Spaced-repetition#motor-skills
My take is pretty similar to the one on cognitive skills: it works well for simple motor skills, but not as well for complex ones.
My initial reaction is that this is almost exactly what I’m proposing: additional value from using Anki to engage the skill sans context (in addition to whatever actual practice is happening with context).
My experience is basically that this doesn’t work. This seems to track with the research on skill transfer, which almost always finds an effect that is non-existent or too small to measure.
Gwern covers a bit of research here on when spacing does and doesn’t work:
Personally I’ve found the biggest problem with spaced repetition for skills and habits is that it’s contextless.
Adding the context from multiple skills with different contexts makes it take way more time, and not having the context makes it next to useless for learning the skills.
What hypothesis would you be “testing”? What I’m proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity.
I mean, I’m saying get minds with many different complexities, figure out a way to communicate with them, and ask them about their experience.
That would help to figure out if complexity is indeed correlated with observer moments.
But how you test this feels different from the question of whether or not it’s true.
Happy to just chat if you’d like. I’ve battled with similar problems of lack of focus, and done a lot of work myself. Happy to listen.
It seems like you can get quite a bit of data with minds that you can interface with? I think it’s true that you can’t sample the space of all possible minds, but testing this hypothesis on just a few seems like high VoI.
Is it an empirical question? It seems more like a philosophical one (what evidence could we see that would change our minds?)
We could talk to different minds and have them describe their experience, and then compare the number of observer moments to their complexity.
Description complexity is the natural generalization of “speed” and “number of observer moments.”
Again this seems to be an empirical question that you can’t just assume.
But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.
To me this is simply empirical. If the computational theory of consciousness is true without reservation, then whenever the computation exists in our universe, the consciousness exists. Perhaps it’s only partially true, and more complex computations, or computations that take longer to run, have less of a sense of consciousness; then the consciousness still exists, but to a lesser degree.
I think that only makes sense to do if those minds are literally “less conscious” than other minds though. Otherwise why would I care less about them because they’re more complex?
It does make sense to me to talk about “speed” and “number of observer moments” as part of moral weight, but “complexity of definition” to me only makes sense if those minds experience things differently than I do.
But we do have an obligation to be grateful for such gifts, which may have been the point of the post.
Obligation feels like a weird word here.
Alternative hypothesis: His smile lights up even the darkest spoiler tag.
Related claim: the decrease in driving has reduced traffic fatalities, saving many more lives.
Claim: The decrease in driving during the lockdown has significantly increased air quality (https://www.weforum.org/agenda/2020/04/coronavirus-covid19-air-pollution-enviroment-nature-lockdown), saving many lives. Seeing as we’re dealing with a respiratory disease, the increase in air quality has probably saved even more lives than it otherwise would have.
I’d strongly disagree with this one: force multipliers are neither good nor bad in themselves.
I think this strongly depends on how long a game you’re playing and how long you have.
In this particular case I think you’re correct. If your timelines are very long, perhaps it makes sense to set up a culture where symmetric weapons are punished.