trevor
“A Muggle security expert would have called it fence-post security, like building a fence-post over a hundred metres high in the middle of the desert. Only a very obliging attacker would try to climb the fence-post. Anyone sensible would just walk around the fence-post, and making the fence-post even higher wouldn’t stop that.” —HPMOR, Ch. 115
(Not to be confused with the Trevor who works at Open Phil)
Please, please, please make more posts on this issue. I really like what I see here, I’ve found it very helpful, and I need to see more.
Please message me with your thoughts if you ever have anything you’d like to share about this problem, e.g. what works, what doesn’t work, and what seems to happen to people.
Mr. Bean makes a mistake (or Charlie Chaplin).
Far Side comic by Gary Larson.
Overlapping mirrors.
Mona Lisa parody.
The Oldest House Threshold from Control.
A painting of a steampunk city that looks like a mix of San Francisco, Chongqing, and Quya. (Optional: replace Quya with Saint Denis.)
Is everyone here properly aware of anthropics? E.g., correctly ordered neurons for human intelligence might have had a one-in-a-quadrillion chance of ever evolving naturally, but it would still look like a probable evolutionary outcome to us, because that is the course evolution must have taken in order for us to be born.
All the “failed intelligence” offshoots like mammals and insects would still be generated either way; the question is just how improbably difficult the remaining milestones between them and us are to replicate. Notably, lesser-brained lifeforms appear to be much more successful (e.g. insects, and then plants), and recent neural networks were made by plagiarizing the neuron, which is the most visible and easily copied part of the human brain.
It’s only a possibility, but I don’t see why it isn’t doing more to push timelines outward.
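The observation-selection effect above can be sketched as a toy simulation (the numbers here are illustrative, not from the comment): whether intelligence is common or vanishingly rare across worlds, every observer who exists looks back on a history where evolution succeeded, so the bare observation “we exist” tells us almost nothing about the prior probability.

```python
import random

def observer_view(p_success: float, n_worlds: int = 100_000, seed: int = 0) -> float:
    """Fraction of *observers* who see intelligence evolve in their own history.

    Observers only exist in worlds where evolution succeeded, so we condition
    on success before 'asking' anyone what they saw.
    """
    rng = random.Random(seed)
    worlds = [rng.random() < p_success for _ in range(n_worlds)]
    observer_worlds = [w for w in worlds if w]  # only successful worlds host observers
    if not observer_worlds:
        return float("nan")  # no observers anywhere to ask
    return sum(observer_worlds) / len(observer_worlds)

# Whether success is common or rare, every observer reports success:
print(observer_view(0.5))    # → 1.0
print(observer_view(1e-3))   # → 1.0
```

The point of the sketch is that the observer-conditional frequency is 1.0 regardless of `p_success`, which is why our own existence can’t rule out astronomically small priors.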
https://en.wikipedia.org/wiki/TERCOM#Comparison_with_other_guidance_systems
Stronger AI is needed to make nuclear missiles that are better at locating their targets when communication is jammed, as well as at outmaneuvering interceptor missiles (“juking”).
It’s not regulation, it’s arms control. It’s buried in secrets, including many that are even more disturbing than whatever I’m willing to talk about on a public internet forum.
Policy experience like yours is in short supply; please move to DC and get in contact with EA groups there. You will learn plenty from working with them in person, and rapidly begin contributing significantly.
Watch out. When markets use real money, people use them to hedge against certain outcomes. In some cases it works a lot like arbitrage, e.g. for people who only gain or lose money in specific outcomes, such as Evergrande defaulting on its debts; they stand to lose very large amounts, and your market is comparatively small.
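As a sketch of how hedging demand can push a real-money market away from traders’ actual beliefs, here is a toy model (log utility, made-up numbers, and a hypothetical $1-if-default contract; none of this is from the comment): an agent who believes a default is only 20% likely, but who loses $50k if it happens, still buys contracts priced well above 20¢.

```python
import math

def optimal_hedge(p: float, price: float, wealth: float, loss: float,
                  max_shares: float) -> float:
    """Shares of a $1-if-event contract maximizing expected log utility
    for an agent who loses `loss` if the event happens (grid search)."""
    best_q, best_u = 0.0, -math.inf
    q = 0.0
    while q <= max_shares:
        # Event occurs: pay q*price, receive q, and eat the loss.
        # Event doesn't occur: just pay q*price.
        u = (p * math.log(wealth - loss - q * price + q)
             + (1 - p) * math.log(wealth - q * price))
        if u > best_u:
            best_q, best_u = q, u
        q += 1.0
    return best_q

# Believes 20% default risk, faces a $50k loss on default.
# Even at a 30¢ price (above their belief), they buy thousands of shares:
print(optimal_hedge(0.20, 0.30, wealth=100_000, loss=50_000, max_shares=60_000))
```

The hedger happily buys at 30¢ despite believing the “fair” price is 20¢, so in a thin market their demand alone can hold the price above the crowd’s real probability estimate.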
This is the best thing I can think of that might change your parents’ minds:
Good luck
It was intentional. I wasn’t aware of the “do not frontpage” option, and even if I had been, I never seriously considered the possibility that this would happen.
I’m new here, and I didn’t think enough about the tradeoffs between a smaller, tight-knit discussion among specialists and hobbyists and a broader display to all kinds of people. I’m fine with additional people seeing it here and there, since I wouldn’t have written it if I didn’t think it was highly worth reading, but big public statements come with consequences that can’t be predicted, even on LessWrong, where people are very pragmatic.
I agree wholeheartedly. It’s also worth noting that militaries around the world live in fear of the CIA, which will give them special attention and burn them if they develop nuclear weapons; as a result, proliferation is widely treated as unthinkable. However, the US has always been liable to flip-flop on nonproliferation policy, and has already done plenty of that with Iran for more than a decade; and of course Iran persisted in spite of the consequences, helped by its existing capability to hold the Gulf hostage.
There are big billionaires and little billionaires, and then there are military elites and big billionaires. Inequality is prevalent among elites too, and insulation from ambitious, well-connected outsiders is a prerequisite for any sort of stability in national-security decision-making. However, personal networks abound, and all sorts of things can happen by chance.
I think that nuclear accidents are very real, but they are also overemphasized on LessWrong, and far too few people here know the basics of nuclear deterrence and coercion, which are among the biggest prerequisites for understanding nuclear standoffs and major conflicts like Ukraine. Deliberate action can be depicted as an accident, dramatically decreasing its risks and costs.
Gain-of-function research in the current era is understandable and sane, even if it’s unfortunate. The programs aren’t gathering dust anymore, as they generally appeared to do after WW2. It’s terrible news, obviously, but everyone’s thinking about it, which means everyone’s thinking about everyone thinking about it. Also, in terms of the offense-defense balance, deterrence and MAD can be outmaneuvered if the enemy has many more options than you, e.g. they can do something that’s a little bit insane when your only option for retaliation is something that’s extremely insane.
Yes, this is a toxic clickbait meme-filled shitpost.
No, toxic emotions are never, ever a winning strategy. All toxic thoughts will always sap motivation, reduce intelligence, weaken decision-making, and impede collaboration. Never let yourself be turned toxic by anything, anywhere, any time, no matter how charismatic it might appear in the heat of the moment.
If vaccines and boosters don’t do much against Omicron variants right now, but might do really well against Omicron variants in a month or so, then right now is the perfect time to draw down campaigns encouraging people to get vaccinated.
People expect to get vaccinated only 2, 3, or 4 times and then be done. If the current vaccine formulation doesn’t do much, then vaccine encouragement clearly should be dialed down, even setting aside the lasting risks to the reputation of vaccination in general.
tl;dr people expect vaccination to work well and they expect to permanently stop getting vaccinated after a specific number of doses. So there should be more vaccination when the vaccines work more effectively, i.e. not right now.
I looked at that mask, but I’m worried about the fit; if air gets in through the side, then it’s effectively P0 (0% filtration), not P100 (≥99.97%).
Could you compare it to this mask? It looks like the fit is better. I know it says “dust mask”, but that’s probably because it’s Amazon; if you look up “covid mask” on Amazon, you get cloth masks, and no other mask on Amazon states that it is meant for COVID. I checked, and there should be research confirming that COVID droplets fall in a specific size range.
It’s a bit of trouble, but it’s probably worth it, since COVID can cause lasting brain damage and chronic sleep problems.
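To put numbers on the fit concern above, here is a back-of-the-envelope model (my own simplifying assumption, not a published standard): air leaking around the seal is completely unfiltered, so even a small leak swamps a P100 filter’s 99.97% efficiency.

```python
def effective_filtration(filter_efficiency: float, leak_fraction: float) -> float:
    """Overall fraction of particles stopped when some inhaled air
    bypasses the filter entirely through a poor face seal."""
    # Air through the leak is unfiltered; the rest passes through the filter.
    penetration = leak_fraction + (1 - leak_fraction) * (1 - filter_efficiency)
    return 1 - penetration

# A P100 filter (99.97%) with a 10% seal leak stops only ~90% of particles:
print(round(effective_filtration(0.9997, 0.10), 4))  # → 0.8997
```

Under this model the seal, not the filter medium, is the binding constraint: upgrading the filter from 95% to 99.97% barely matters once a few percent of each breath goes around it.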
Building on this, there’s also the evolutionary element. Humans continued evolving after civilization arose, even if the timeframes were too short for much new mutation. During early civilizations, and probably most civilizations before the industrial revolution, pessimistic outlooks were extremely common because they usually matched reality. As a result, both instinctive and learned pessimism would give an individual more surviving offspring within any given civilization.
“As AI gradually becomes more capable of modelling and understanding its surroundings, the risks associated with glitches and unpredictable behavior will grow. If artificial intelligence continues to expand exponentially, then these risks will grow exponentially as well, and the risks might even grow exponentially shortly after appearing”
“AI cheats. We’ve seen hundreds of unique instances of this. It finds loopholes and exploits them, just like us, only faster. The scary thing is that, every year now, AI becomes more aware of its surroundings, behaving less like a computer program and more like a human that thinks but does not feel”