I just found out that hypnosis is real and not pseudoscience. Apparently the human brain has a zero day such that other humans can find ways to read and write to your memory, and everyone is insisting that this is fine and always happens with full awareness and consent?
Wikipedia says as many as 90% of people are at least moderately susceptible, and depending on how successful people have been over the last couple of centuries at finding ways to reduce detection risk per instance (e.g. developing and selling various galaxy-brained misdirection ploys), that seems like very fertile ground for salami-slicing attacks that wear down partially resistant people.
The odds that something like this would be noticed and then tested/scaled/optimized by competent cybersecurity experts and power lawyers seem pretty high (e.g. oscillating the screen refresh rate in non-visible ways to increase feelings of stress or discomfort and then turning it off whenever the user’s eyes are about to pass over specific kinds of words; slightly altering the color output of specific pixels across the screen in the shape of words and measuring effectiveness by whether it causally increases the frequency of people using those words; some way of combining these two tactics; something derived from the millions of people on YouTube trying hard to find a video file that hypnotizes them; etc.).
It’s really frustrating living in a post-MKUltra world, where with every decade our individual sovereignty as humans relies more heavily on very senior government officials (who are probably culturally similar to the type of person who goes to business school, and have been for centuries) either consistently failing at the manipulation science they are heavily incentivized to diversify their research investment into, or on us taking them at their word when they insist that they genuinely believe in protecting democracy and that the bad things they get caught doing are in service of that end. They also seem to remain uninterested in life extension, possibly due in part to being buried deep in a low-trust dark forest (is trust even possible at all if you’re trapped on a planet with hypnosis?).
Aside from the incredibly obvious move of covering up your fucking webcam right now, are there any non-fake defensive strategies to reduce the risk that someone walks up to you or hacks your computer and takes everything from you? Is there some reliable way to verify that the effects are consistently weak or that scaling isn’t viable? The error bars are always really wide for the prevalence of default-concealed deception (especially for stuff that wouldn’t scale until the 2010s), which makes solid epistemics a huge pain to get right, but the situation with directly reading and writing to memory is just way, way too extreme to ignore.
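On the “verify the effects are weak” front, one cheap partial check for the pixel-alteration scenario above is frame differencing: screenshot the framebuffer twice and amplify the per-pixel differences until anything near-invisible becomes obvious. This is a minimal sketch of my own, not something from any source; the function names and the gain/delay parameters are illustrative, and it assumes Pillow and NumPy are installed.

```python
# Minimal sketch (illustrative, not a vetted tool): amplify tiny
# frame-to-frame pixel differences to make near-invisible screen
# overlays visible to the naked eye. Assumes Pillow and NumPy;
# ImageGrab.grab() works out of the box on Windows and macOS.
import time

import numpy as np
from PIL import Image, ImageGrab


def capture() -> np.ndarray:
    """Grab the current screen as a float32 array of 0-255 values."""
    return np.asarray(ImageGrab.grab()).astype(np.float32)


def amplified_diff(gain: float = 40.0, delay: float = 0.5) -> Image.Image:
    """Screenshot twice, `delay` seconds apart, and return the absolute
    per-pixel difference scaled by `gain`. A mostly-black result means
    nothing changed; a faint word-shaped overlay that a human couldn't
    consciously perceive shows up as a bright shape."""
    before = capture()
    time.sleep(delay)
    after = capture()
    diff = np.clip(np.abs(after - before) * gain, 0, 255).astype(np.uint8)
    return Image.fromarray(diff)


if __name__ == "__main__":
    amplified_diff().save("screen_diff.png")  # inspect for hidden structure
```

The obvious limitation: this sees what the OS thinks it’s drawing, so it could catch software-level pixel overlays but not refresh-rate tricks happening at the display itself; checking those would need something like a photodiode or high-speed camera pointed at the panel.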
The best thing I’ve found so far is to watch a movie, and whenever the screen flashes, or any moment you feel weirdly relaxed or have any other weird feeling, quickly turn your head and eyes ~60 degrees and gently but firmly bite your tongue.
Doing this a few minutes a day for 30 days might substantially improve resistance to a wide variety of threats.
Gently but firmly biting my tongue also seems, for me, like a potentially very good general-use strategy for returning the mind to an alert and clear-minded base state; it seems like something Critch recommended, e.g. for initiating a TAP flowchain. I don’t think this can substitute for a whiteboard, but it sure can nudge you towards one.
I think that “long-term planning risk” and “exfiltration risk” are both really good ways to explain AI risk to policymakers. Also, “grown not built”.
They delineate pretty well some criteria for what the problem is and isn’t. Systems that can’t plan long-term or exfiltrate themselves are basically not the concern here (although theoretically there might be a small chance of very strange things growing in mind-design space that cause human extinction without long-term planning or knowing how to exfiltrate).
I don’t think these are better than the fate-of-humans-vs-gorillas analogy, which is a big reason why most of us are here, but splitting the AI risk situation into easy-to-digest components, instead of logically/mathematically simple components, can go a long way (depending on how immersed the target demographic is in social reality and low-trust environments).