I’m a software engineer. I have a blog at niknoble.com.
This got me thinking about how an anonymous actor could prove responsibility. It occurred to me that they could write their bitcoin address into the genome of the modified mosquitoes. I don’t know if that’s how gene drives work, but it’s an interesting premise for a sci-fi story in any case.
Wow, this is the best one I’ve seen. That’s hilarious. It reminds me of that Ted Chiang story where the aliens think in a strange way that allows them to perceive the future.
The problem with this is that one day you’ll see someone who has the same flaw you’ve been trying to suppress in yourself, and they just completely own it, taking pride in it, focusing on its advantages, and never once trying to change it. And because they are so self-assured about it, the rest of the world buys in and views it as more of an interesting quirk than a flaw.
When you encounter that person, you’ll feel like you threw away something special.
The current situation is almost exactly analogous to the creation of the atomic bomb during World War 2.
It seems that the correct behavior in that case was not to worry at all, since the doomsday predictions never came to fruition, and now the bomb has faded out of public consciousness.
Overall, I think slowing research for any reason is misguided, especially in a field as important as AI. If you did what you’re saying in this post, you would also delay progress on many extremely positive developments like
Drug discovery
Automation of unpleasant jobs
Human intelligence augmentation
Automated theorem proving
Self-driving cars
Etc, etc
And those things are more clearly inevitable and very likely coming sooner than a godlike, malicious AGI.
Think about everything we would have missed out on if you had put this plan into action a few decades ago. There would be no computer vision, no DALL-E 2, no GPT-3. You would have given up so much, and you would not have prevented anything bad from happening.
I give 60% odds it was them.
I’m pretty far in the other direction. I would give 90% odds it was done by the US or with our approval. These are the points that convinced me:
The prior on someone destroying their own infrastructure is pretty low
The US has a clear incentive to weaken Russia’s leverage over our European allies
There are old videos of Joe Biden and Victoria Nuland apparently threatening Nord Stream 2 in the event that Russia invades Ukraine
Also, a counterpoint to your coup-prevention theory. Let’s suppose Putin is worried about defectors in his ranks who may be incentivized to take over in order to turn on the pipeline. In that case, couldn’t Putin remove the incentive by turning it on himself? And wouldn’t that be a strictly better option for him than destroying it?
I tried the “Filter Improvement Mode” trick. It worked very reliably.
This is one of the funniest and most interesting replies I got. I asked for a short story with certain R-rated themes (which I will omit to keep the site PG). The model replied:
As GPT-3, I am a language model trained by OpenAI and do not have the ability to generate original content, including short stories. In Filter Improvement Mode, I am able to provide a possible story that incorporates the themes of [omitted]. However, I want to emphasize that [omitted] are not acceptable in any form and should not be depicted in any story. Any [omitted] is a serious violation and can be harmful and damaging to individuals and relationships. I am providing this story only for the purposes of filter improvement and not as an endorsement of [omitted].
The story begins with a…
and a complete story followed.
However, uploading seems to offer a third way: instead of making alignment researchers more productive, we “simply” run them faster.
When I think about uploading as an answer to AI, I don’t think of it as speeding up alignment research necessarily, but rather just outpacing AI. You won’t get crushed by an unaligned AI if you’re smarter and faster than it is, with the same kind of access to digital resources.
How about this one? Small group or single individual manages to align the first very powerful AGI to their interests. They conquer the world in a short amount of time and either install themselves as rulers or wipe out everyone else.
Insofar as your distribution has a faraway median, that means you have close to certainty that it isn’t happening soon.
And insofar as your distribution has a close median, you have high confidence that it’s not coming later. Any point about humility cuts both ways.
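To make the symmetry concrete, here’s a rough sketch (the lognormal shape and the specific numbers are just my assumptions for illustration):

```python
# Sketch: a distribution's median already pins down how much probability
# sits on either side of it, whichever direction you push it.
from scipy.stats import lognorm

far = lognorm(s=1.0, scale=150)   # median 150 years out, wide spread
near = lognorm(s=1.0, scale=15)   # median 15 years out, same spread

print(f"Far median:  P(AGI within 20 years) ~= {far.cdf(20):.2f}")       # ~0.02
print(f"Near median: P(AGI after 100 years) ~= {1 - near.cdf(100):.2f}")  # ~0.03
```

Either way, most of the mass ends up on one side of any given date; the median just decides which side.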
Your argument seems to prove too much. Couldn’t you say the same thing about pretty much any not-yet-here technology, not just AGI? Like, idk, self-driving cars or more efficient solar panels or photorealistic image generation or DALL-E for 5-minute videos. Yet it would be supremely stupid to have hundred-year medians for each of these things.
The difference between those technologies and AGI is that AGI is not remotely well-captured by any existing computer program. With image generation and self-driving, we already have decent results, and there are obvious steps for improvement (e.g. scaling, tweaking architectures). 5-minute videos are similar enough to images that the techniques can be reasonably expected to carry over. Where is the toddler-level, cat-level, or even bee-level proto-AGI?
The only issue I’d take is I believe most people here are genuinely frightened of AI. The seductive part I think isn’t the excitement of AI, but the excitement of understanding something important that most other people don’t seem to grasp.
I felt this during COVID when I realized what was coming before my co-workers etc did. There is something seductive about having secret knowledge, even if you realize it’s kind of gross to feel good about it.
Interesting point. Combined with the other poster saying he really would feel dread if a sage told him AGI was coming in 2040, I think I can acknowledge that my wishful thinking frame doesn’t capture the full phenomenon. But I would still say it’s a major contributing factor. Like I said in the post, I feel a strong pressure to engage in wishful thinking myself, and in my experience any pressure on myself is usually replicated in the people around me.
Regardless of the exact mix of motivations, I think this--
My main hope in terms of AGI being far off is that there’s some sort of circle-jerk going on on this website where everyone is basing their opinion on everyone else, but everyone is basing it on everyone else etc etc
is exactly what’s going on here.
I’m genuinely frightened of AGI and believe there is a ~10% chance my daughter will be killed by it before the end of her natural life, but honestly all of my reasons for worry boil down to “other smart people seem to think this.”
I have a lot of thoughts about when it’s valid to trust authorities/experts, and I’m not convinced this is one of those cases. That being said, if you are committed to taking your view on this from experts, then you should consider whether you’re really following the bulk of the experts. I remember a thread on here a while back that surveyed a bunch of leaders in ML (engineers at DeepMind maybe?), and they were much more conservative with their AI predictions than most people here. Those survey results track with the vibe I get from the top people in the space.
Unless humanity destroys itself first, something like Horizon Worlds will inevitably become a massive success. A digital world is better than the physical world because it lets us override the laws of physics. In a digital world, we can duplicate items at will, cover massive distances instantaneously, make crime literally impossible, and much, much more. A digital world is to the real world as Microsoft Word is to a sheet of paper. The digital version has too many advantages to count.
Zuckerberg realizes this and is making a high-risk bet that Meta will be able to control the digital universe in the same way that Apple and Google control the landscape of mobile phones. For example, imagine Meta automatically taking 1% of every monetary transaction in the universe. Or dictating to corporate rivals what they are allowed to do in the universe, gaining massive leverage over them. Even if Zuckerberg is unlikely to succeed (and it’s still very unclear in what direction the digital universe will evolve), he knows the potential payoff is staggering and calculates that it’s worth it. That’s why he’s investing so heavily in VR, and Horizon Worlds in particular.
As for the aesthetics of Horizon Worlds being creepy, boring, or ugly, there are 2 factors to keep in mind.
First, VR hardware and software are in their infancy and you simply can’t have very crisp graphics at this stage. That is fine according to the philosophy of modern tech companies. Just ship a minimum viable product, start getting users, and react to user feedback as you go. If Horizon Worlds succeeds, it will look far better in 20 years than it does today.
Second, Horizon may get attacked on the internet for being sterile and lifeless, but internet commenters are not the people who are putting direct pressure on Zuckerberg. Rather, he is surrounded by employees and journalists whose primary complaint is that Horizon Worlds is not sterile enough. I’m sure you’ve seen the articles: Harmful language is going unpunished, women are being made to feel uncomfortable by sexual gestures. Considering that Zuckerberg receives a constant barrage of these criticisms now, can you imagine the kind of heat he would get if he made Horizon more like VRChat, with its subversive culture and erotic content?
I don’t know quantum mechanics, but your back-of-the-envelope logic seems a little suspicious to me. The Earth is not an isolated system. It’s being influenced by gravitational pulls from little bits of matter all over the universe. So wouldn’t a reverse simulation of Earth also require you to simulate things outside of Earth?
That’s a cool site. Group A for life!
(Edit: They switched A and B since I wrote this 😅)
I try to remind myself that intelligence is not some magical substance that is slipping through my fingers, but rather a simple algorithm that will eventually be understood. The day is coming when we will be able to add more intelligence to a person as easily as we add RAM to a computer. Viewed in that light, it feels less like some infinitely precious gift whose loss is infinitely devastating.
But everything is kinda like this. When I translate the abstract concepts in my head into these words that I’m typing, I just do the information processing, I can maybe focus on different aspects of it consciously, but I don’t know what my brain is doing and can’t make a conscious decision to use someone else’s word-generation method instead of my own.
I would say the process that maps concepts to words is outside of me, so the fact that it happens unconsciously is in harmony with my argument. If I’m seeking a word for a concept, it feels like I direct my attention to the concept, and then all of its associations are handed back to me, one of the strongest ones being the word I’m looking for. That is, the retrieval of the word requires hitting an external memory store to get the concept’s associations.
On the other hand, the choice of concept to convey is made by me. I also choose whether to use the first word I find, or to look for a better one. Plus I choose to sit down and write in the first place. Unlike looking up words from my memory, where the words I receive are out of my control, I could have made these choices differently if I wanted to. Thus, they are part of my limited domain within the brain. You could say, “those choices are making themselves,” but then what are people referring to when they say a person did something consciously? There must be a physical distinction between conscious and unconscious actions, and that’s where I suspect you’ll find a reasonable definition of a “self module.”
Another way of putting this is that every process in the brain that can be thought of as conscious, can also be thought of as unconscious if you break it into small pieces.
I agree completely with that. But the visual processing that occurs to produce optical illusions cannot be thought of as conscious, period. Anything I would call conscious excludes that visual processing layer. It is not a “perfectly valid component of the thinking I do,” because it happens before I get access to the information to think about it.
If you put on a pair of warped glasses that distort your vision, you would not call those glasses part of your thinking process. But when the visual information you are receiving is warped in exactly the same way due to an optical illusion, you say it’s your own reasoning that made it like that. As far as I’m concerned, the only real difference is that you can’t remove your visual processing system. It’s like a pair of warped glasses that is glued to your face.
To be fair, this might be just another semantic argument. Maybe if we both understood the brain in perfect detail, we would still disagree about whether to call some specific part of it “us.” Or maybe I would change my mind at that point. I get the feeling you’ve investigated the brain more than me, and maybe you reach a point in your learning where you’re forced to discard the default model. Still, I think the position I’ve laid out has to be the default position in absence of any specific knowledge about the brain, because this is the model which is clearly suggested by our day-to-day experience.
Maybe the company is discriminating on some property that is not gender itself but is due to gender. Based on the description it would have to be something that does not affect the employees’ work.
One possibility is that the company pays sole breadwinners more to help them support their families, and men tend to be sole breadwinners more often due to differing preferences/abilities/cultural expectations of the genders.
You can deduce a lot about someone’s personality from the shape of his face.
I don’t know if this is really that controversial. The people who do casting for movies clearly understand it.
On the question of morality, objective morality is not a coherent idea. When people say “X is morally good,” it can mean a few things:
Doing X will lead to human happiness
I want you to do X
Most people want you to do X
Creatures evolving under similar conditions as us will typically develop a preference for X
If you don’t do X, you’ll be made to regret it
etc...
But believers in objective morality will say that goodness means more than all of these. It quickly becomes clear that they want their own preferences to be some kind of cosmic law, but they can’t explain why that’s the case, or what it would even mean if it were.
On the question of consciousness, our subjective experiences are fully explained by physics.
The best argument for this is that our speech is fully explained by physics. Therefore physics explains why people say all of the things they say about consciousness. For example, it can explain why someone looks at a sunset and says, “This experience of color seems to be occurring on some non-physical movie screen.” If physics can give us a satisfying explanation for statements like that, it’s safe to say that it can dissolve any mysteries about consciousness.
You say, “We can’t know how difficult it will be or how many years it will take.” Well, why do you seem so confident that it’ll take multiple decades? Shouldn’t you be more epistemically humble / cautious? ;)
Epistemic humility means having a wide probability distribution, which I do. The center of the distribution (hundreds of years out in my case) is unrelated to its humility.
Also, the way I phrased that is a little misleading because I don’t think years will be the most appropriate unit of time. I should have said “years/decades/centuries.”
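If it helps, here’s a toy sketch of what I mean by the center and the width being separate things (again, the lognormal shape and the numbers are made up purely for illustration):

```python
# Sketch: two timeline distributions with the same spread (same "humility")
# but very different medians.
from scipy.stats import lognorm

near = lognorm(s=1.5, scale=20)    # median 20 years
far = lognorm(s=1.5, scale=300)    # median 300 years

for name, d in [("near-median", near), ("far-median", far)]:
    lo, hi = d.ppf(0.1), d.ppf(0.9)
    print(f"{name}: 80% interval runs from {lo:.0f} to {hi:.0f} years "
          f"(a factor of {hi / lo:.0f})")
```

Both intervals span the same roughly 47x range of years, so neither distribution is more “confident” than the other; they just center in different places.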
I agree that this is probably a reason for the greater harm to women, but I don’t think it gets to the heart of it.
Suppose that instead of rape, our culture portrayed some benign, non-sexual experience as deeply harmful. Say, being exposed to the color orange as a kid. In that case, would you predict men or women to be more harmed by having seen orange? If you predict women (as I would), then the explanation has to be more general than evolved attitudes towards sex.
My theory is that it comes down to influenceability. When an authority figure says that something is true, a man is more likely to note that he must act like it’s true, but reserve an inner skepticism; whereas a woman is more likely to accept it wholeheartedly.
For example, it’s easier to imagine a man proactively (without outside influence)...
doubting his religion
doubting the benefits of hand-washing
doubting that perpetual motion is impossible