You should probably take reverse causation into account here. I doubt the effect of the school is nearly as strong as you think, since people who want finance jobs are drawn to the schools known for getting people finance jobs. Add to that the fact that schools known for certain things are outliers; if you go to a random state school, the students are going to have much more varied interests.
Any chance you can link to that discussion? I’m really curious.
When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk comes mostly from self-replicating AI, and an AI that can design and build silicon chips (or whatever equivalent) can also build guns, while an AI designed to operate a gun doesn’t seem any more likely to be good at building silicon chips.
I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don’t really think there’s any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines and they don’t need particularly fast reactions to be effective.
While military robots might be bad for other reasons, I don’t really see the path from this to doom. If AI-powered weaponry doesn’t work as expected, it might kill some people, but it can’t repair or replicate itself or make long-term plans, so it’s not really an extinction risk.
I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
You could imagine a similar situation in medicine: I think a virus engineered to spread rapidly among humans and rewrite our DNA to solve all of our health issues and make us smarter would be really good, and I might think it’s the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we’re very, very confident.
It’s worth noticing that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.
You could argue that Eliezer and some other rationalists are slowing down AGI and that that’s bad because they’re wrong about the risks, but that’s not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that’s because it’s really obvious.
I would be more worried about getting kicked out of parties because you think “the NRC is a good thing”.
More seriously, your opinion on this doesn’t sound very e/acc to me. Isn’t their position that we should accelerate AGI even if we know it will kill everyone, because boo government yay entropy? I think rationalists generally agree that speeding up the development of AGI (that doesn’t kill all of us) is extremely important, and I think a lot of us don’t think current AI is particularly dangerous.
To be fair, the one-in-a-million legislators who make it to the federal level probably are very good at politics. It’s kind of unreasonable to hold them to the standard of knowing (and demonstrating their knowledge of) things about economics or healthcare when their job is to win popularity contests by saying transparently ridiculous things.
I’m not downvoting because this has already been downvoted far enough, but downvoting doesn’t mean you think the post has committed a logical fallacy. It means you want to see less of that on LessWrong. In this case, I would downvote because complaining about the voting system isn’t interesting or novel.
I realized after asking that my default prompt makes ChatGPT really verbose, so I changed the prompt to:
Identify types of human cells using the following marker genes. Identify one cell type for each row. Only provide the cell type name and no other commentary.
And it gave me:
Embryonic stem cells
Induced pluripotent stem cells
Endoderm
Granulosa cells
Oocytes
Pituitary gland cells
Germ cells
Leydig cells
Neurons
Meiotic cells
Sertoli cells
Neural progenitor cells
For row 9, it’s actually interesting that if I let it give commentary, it says:
CASC3, PGAP1, SLC6A16, CNTNAP4, NPHP1 - This set of genes does not point to a well-defined cell type but could suggest Neuronal Cells or specific types of Neural Precursors based on the presence of neural development and function genes.
For what it’s worth, Comcast is really, really good at providing reliable internet access (providing relatively good managed WiFi routers since WiFi is usually the worst part of the network, proactive detection of downtime and service degradation, improving latency even though it’s not a ‘headline number’, maintaining enough slack that they hit the “up to” advertised speed close to 100% of the time, etc.). The only service issue they have is not caring about upload speeds, but there’s a fundamental tradeoff with the legacy cable network and they’re probably right that most people would rather have faster downloads than faster uploads (it still makes me sad, though).
I’m probably biased because I worked for the cable industry (around a decade ago), but purely looking at service quality, Comcast is actually very impressive.
> So Comcast is stuck with zero credit for when it provides me with near-instant access to an almost infinite amount of great content (much of it for free[1]), but major blame for the small % of the time when it doesn’t.
My disagreement is that I don’t think people are generally upset with Comcast about internet service problems, they’re upset about completely different parts of the business (billing, customer service).
I think this is fair, since “hating” a company typically has to do with how you feel about your interactions with them (do they treat you fairly, nicely, etc.), not how good they are at their jobs.
Taking this the other direction, some local ISPs provide service that isn’t very “good” (using wireless tech, which has fundamental limitations; having fewer people on call to fix problems; having fewer customers to spread up-front costs across), but they’re very wholesome and nice to work with. Even if I choose not to use their service because of the limitations, I don’t hate them, because they’re doing their best.
I think people hate Comcast because of their customer service and pricing, not the quality of their product. I know plenty of people who used to[1] use Comcast despite hating it because the service was so much better than the competitors.
[1] My hometown has really good sort-of-city-provided fiber now, so no one who cares uses Comcast anymore.
One issue is figuring out who will watch the supervillain light. If we need someone monitoring everything the AI does, that puts some serious limits on what we can do with it (we can’t use the AI for anything that we want to be cheaper than a human, or anything that requires superhuman response speed).
But it also creates an incentive to bring lots of annoying stuff to a vote just to force your political enemies to vote against it. For example, if you put “Deport all Rationalists” up for a vote as often as possible, you can prevent Rationalists from voting for anything else.
If you’ve got 100-300 kilovolts coming out of a utility and it’s got to step down all the way to six volts, that’s a lot of stepping down.
We just need Nvidia to come out with chips that run on 300 kV directly.
There’s an idea in security that you should avoid weak security because it lets you trick yourself into thinking you’re doing something. For example, if you’re not going to protect passwords, in some sense it’s better to leave them completely plaintext instead of hashing them with MD5. At least in the plaintext case you know you’re not protecting them (and won’t accidentally do something unsafe with them on the assumption that they’re already protected by being hashed).
I feel like this is a case like that:
If you don’t care whether these become public, consider just making them public.
If you don’t think they should be public, use something that guarantees they’re not (like the random ID solution).
The solution you proposed is better than nothing and might protect some email addresses in some cases, but it raises the question: if you need to protect these sometimes, why not all the time? And if not protecting them sometimes is OK, why bother at all?
(I should say, though, that there are benefits to making data annoying to access: your scheme will protect the data from casual snoopers and prevent it from being crawled by search engines unless someone goes to the trouble of de-anonymizing and reposting it. My point is mostly just that you should ask whether you’re OK with it becoming entirely public.)
Would plans to stop a rogue AI from hacking things be any different than the normal work people do to prevent hacking?
I realized after writing this that you meant that people’s email addresses are private but their scores are public if you know their email. I’d default to not exposing people’s participation and scores unless they expected that to happen, but maybe that’s less of an issue than I was thinking. The predictability of LessWrong users’ emails would still expose a lot of email addresses.
I’d still recommend the random ID solution, though, since it’s trivial to reason about (it’s basically a one-time pad).
I think this would provide security against people casually accessing each other’s scores but wouldn’t provide much protection against a determined attacker.
Some problems:
There’s no protection at all for someone’s scores if the attacker knows their email address (and email addresses aren’t secret)
It’s probably not that hard to build or acquire a list of LessWrong users’ email addresses
Even if you just brute-force this, there are probably patterns in LessWrong users’ email addresses that make them distinguishable from random email addresses (more likely to be @somerationalistgroup.com, @gmail, recognizably American, nerdy, etc.).
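To make the brute-force point concrete, here’s a minimal sketch, assuming the proposed scheme publishes each score keyed by an unsalted hash of the user’s email (my guess at the proposal; the hash choice and addresses below are made up for illustration). An attacker with a list of plausible emails just hashes every candidate and looks it up:

```python
import hashlib

def email_hash(email: str) -> str:
    """Assumed scheme: an unsalted hash of the email address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# What might get published: hash -> score, with the emails "hidden".
published_scores = {email_hash("alice@gmail.com"): 42}

# The attack: hash plausible LessWrong emails and check for matches.
candidates = ["alice@gmail.com", "bob@somerationalistgroup.com"]
for email in candidates:
    if email_hash(email) in published_scores:
        print(f"{email} -> {published_scores[email_hash(email)]}")
```

Per-user salts would break this lookup, but then you have to distribute the salts privately anyway, at which point the random ID approach below is simpler.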
A better solution:
Generate a random ID for each user and add it to your data
Email users their random ID
Publish the data with emails removed
(And remove anything else that could be used to reconstruct users, like jobs/locations/etc. if relevant)
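A minimal sketch of that flow (the rows and the email step are placeholders; `secrets.token_hex` produces IDs with no relationship to the emails, so the published file can’t be reversed):

```python
import secrets

# Placeholder data; in practice this comes from your results table.
rows = [
    {"email": "alice@gmail.com", "score": 42},
    {"email": "bob@example.com", "score": 17},
]

# 1. Generate an unguessable random ID for each user.
ids = {row["email"]: secrets.token_hex(16) for row in rows}

# 2. Email each user their ID (stubbed out as a print here).
for email, user_id in ids.items():
    print(f"To {email}: your results ID is {user_id}")

# 3. Publish the data with emails (and anything else identifying) removed.
public = [{"id": ids[row["email"]], "score": row["score"]} for row in rows]
print(public)
```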
Just curious, but if you found a big group house you liked where everyone had kids, would you be interested? I guess it would have to be a pretty big house.