“Sure, cried the tenant men, but it’s our land…We were born on it, and we got killed on it, died on it. Even if it’s no good, it’s still ours….That’s what makes ownership, not a paper with numbers on it.”
“We’re sorry. It’s not us. It’s the monster. The bank isn’t like a man.”
”Yes, but the bank is only made of men.”
″No, you’re wrong there—quite wrong there. The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”
― John Steinbeck, The Grapes of Wrath
FinalFormal2
[Question] How did LW update p(doom) after LLMs blew up?
[Question] What’s the status of TDCS for improving intelligence?
Could someone open a manifold market on the relevant questions here so I could get a better sense of the probabilities involved? Unfortunately, I don’t know the relevant questions or the have the requisite mana.
Personal note- the first time I came into contact with adult gene editing was the youtuber Thought Emporium curing his lactose intolerance, and I was always massively impressed with that and very disappointed the treatment didn’t reach market.
[Question] What subjects are unexpectedly high-utility?
I think you’re overestimating the strength of the arguments and underestimating the strength of the heuristic.
All the Marxist arguments for why capitalism would collapse were probably very strong and intuitive, but they lost to the law of straight lines.
I think you have to imagine yourself in that position and think about how you would feel and think about the problem.
Any arguments for AI safety should be accompanied by images from DALL-E 2.
One of the key factors which makes AI safety such a low priority topic is a complete lack of urgency. Dangerous AI seems like a science fiction element, that’s always a century away, and we can fight against this perception by demonstrating the potential and growth of AI capability.
No demonstration of AI capability has the same immediate visceral power as DALL-E 2.
In longer-form arguments, urgency could also be demonstrated through GPT-3′s prompts, but DALL-E 2 is better, especially if you can also implicitly suggest a greater understanding of concepts by having DALL-E 2 represent something more abstract.
(Inspired by a comment from jcp29)
That seems like a useful heuristic-
I also think there’s an important distinction between using links in a debate frame and in a sharing frame.
I wouldn’t be bothered at all by a comment using acronyms and links, no matter how insular, if the context was just ‘hey this reminds me of HDFT and POUDA,’ a beginner can jump off of that and get down a rabbit hole of interesting concepts.
But if you’re in a debate frame, you’re introducing unnecessary barriers to discussion which feel unfair and disqualifying. At its worst it would be like saying: ‘youre not qualified to debate until you read these five articles.’
In a debate frame I don’t think you should use any unnecessary links or acronyms at all. If you’re linking a whole article it should be because it’s necessary for them to read and understand the whole article for the discussion to continue and it cannot be summarized.
I think I have this principle because in my mind you cannot not debate so therefore you have to read all the links and content included, meaning that links in a sharing context are optional but in a debate context they’re required.
I think on a second read your comment might have been more in the ‘sharing’ frame than I originally thought, but to the extent you were presenting arguments I think you should maximize legibility, to the point of only including links if you make clear contextually or explicitly to what degree the link is optional or just for reference.
[Question] What are the arguments for/against FOOM?
I’m always interested in easy QoL improvements- but I have questions.
Water quality can have surprisingly high impact on QoL
What’s the evidence for this particularly?
What are the important parts of water quality and how do we know this?
[Policy makers]
A couple of years ago there was an AI trained to beat Tetris. Artificial intelligences are very good at learning video games, so it didn’t take long for it to master the game. Soon it was playing so quickly that the game was speeding up to the point it was impossible to win and blocks were slowly stacking up, but before it could be forced to place the last piece, it paused the game.
As long as the game didn’t continue, it could never lose.
When we ask AI to do something, like play Tetris, we have a lot of assumptions about how it can or should approach that goal, but an AI doesn’t have those assumptions. If it looks like it might not achieve its goal through regular means, it doesn’t give up or ask a human for guidance, it pauses the game.
I am two people removed away from a gentleman who has a sleep disorder that means he can never sleep more than two hours, and he’s otherwise healthy. That seems to suggest if there are negative effects from lack of sleep, they aren’t incurred from practical biological necessity, but from what our brains do to enforce sleep.
Also, OP, if you actually don’t have any ill effects from 6 or so hours of sleep, it’s possible you might have a similar genetic condition.
In the comments, I don’t understand why people seem to be so swayed by the comparison of sleep deprivation to fasting or exercise. The only thing that tells you is that things that might seem harmful sometimes aren’t, which is obviously the case. It doesn’t speak at all to whether or not acute sleep deprivation is good for you no more than it speaks to whether not getting occasionally getting blitzed is good for you.
Similarly, the idea of ‘variety is good’ as a general principle of health has so many obvious exceptions, and seems to be just be based on memes about nutrition. It doesn’t seem like a principle you could use to make accurate predictions about health science.
[Question] Steelmanning OpenAI’s Short-Timelines Slow-Takeoff Goal
[Question] How does consciousness interact with architecture?
[Question] What’s the consensus on porn?
This is the equivalent of saying that macbooks are dangerously misaligned because you could physically beat someone’s brains out with one.
I will say baselessly that telling ChatGPT not to say something raises the probability of it actually saying that thing by a significant amount, just by virtue of the text appearing previously in the context window.
Do you think OpenAI is ever going to change GPT models so they can’t represent or pretend to be agents? Is this a big priority in alignment? Is any model that can represent an agent accurately misaligned?
I swear- anything said in support of the proposition ‘AIs are dangerous’ is supported on this site. Actual cult behavior.
I feel like your predictions for 2022 are just a touch over the mark, no? GPT-3 isn’t really ‘obsolete’ yet or is that wrong?
I’m sure it will be in a minute, but I’d probably update that benchmark to probably occurring mid 2023, or potentially whenever GPT-4 gets released.
I really feel like you should be updating slightly longer, but maybe I misunderstand where we’re at right now with chatbots. I would love to hear otherwise.
I don’t think Christians agree that the utility of heaven is finite, I think they think it is infinite they’re just not interested in thinking about the implications
He alleges a psyop. It’s alright to feel nervous about the content.
I don’t like the number of links that you put into your first paragraph. The point of developing a vocabulary for a field is to make communication more efficient so that the field can advance. Do you need an acronym and associated article for ‘pretty obviously unintended/destructive actions,’ or in practice is that just insularizing the discussion?
I hear people complaining about how AI safety only has ~300 people working about it, and how nobody is developing object level understandings and everyone’s thinking from authority, but the more sentences you write like: “Because HFDT will ensure that it’ll robustly avoid POUDA?” the more true that becomes.
I feel very strongly about this.