Out of curiosity, would you be against mind uploading/whole brain emulation, if it were possible? By “machine”, do you mean nonhuman artificial intelligences or do you mean any form of mind running on a computer?
I don’t necessarily disagree with you, but that’s not my read of what the Pro-Human Declaration is saying. “No AI Personhood” is in the “Human Agency and Liberty” section, next to stuff like “AI should not be allowed to exploit data about the mental or emotional states of users” and “AI systems should be designed to empower, rather than enfeeble their users”. In context, I would not consider their position on AI personhood to be rooted in x-risk concerns. The first two points of the declaration are “Human Control Is Non-Negotiable” and “Meaningful Human Control”. Fulfilling those points would effectively require the AI systems be aligned, but I see no statement or implication that, if the AI systems were aligned and were moral patients, the writers and signatories of this declaration would change their position. I could be wrong! This is very much a big tent thing. But it does worry me that this line made it into the declaration.
I agree in thinking this might slow OpenAI’s frontier development, but I don’t think it’ll move overall timelines by years. OpenAI is currently building out datacenters, so I would expect a delay of at most one year (although this could change who reaches capabilities breakthroughs first).
I just don’t see why an OpenAI slowdown would affect overall industry timelines that substantially. It might reduce pressure on Anthropic to ship, but I don’t expect it to stall their internal development much.
It doesn’t bother me that Epoch took money from OpenAI. It doesn’t bother me that OpenAI has access to the FrontierMath solutions.
What does bother me is Epoch concealing this information. I certainly assumed FrontierMath was a private eval. Clearly there are people who would not have worked on this if they’d known OpenAI would have access to the dataset. I’m really not sure why Epoch or OpenAI think misleading people about this is beneficial to them—this information coming out now, like this, just means people won’t trust Epoch in the future. Was the data they received via deception from people who wouldn’t have participated really worth burning trust like this?
I was excited about FrontierMath when it was revealed, doubly so when o3 made such impressive progress. I think o3’s results are probably uncontaminated; it would be a very bad move for OpenAI to fake progress when they could instead make real progress. But concealing this was also a bad move, so I don’t know. I really hope Epoch doesn’t pull anything like this with their upcoming computer use benchmark.
(...and I’m shocked they’re trusting verbal agreements from OpenAI about how the data is being used. Is getting stuff in writing really that hard?)
It’s interesting to me that you think mind uploading is impossible but brain emulation could be possible. I was using those words to refer to the same thing! I assume what you think here is that moving a mind from a biological to digital substrate is impossible but copying one is not? To be honest, I’m confused about how consciousness works and don’t really have much of a solid opinion about this.
Anyway, I agree that we need a system which protects existing biological life if we’re going to make lots of digital minds which we ought to grant rights. We also need those minds to respect that system, which requires solving technical alignment at least in the case of nonhuman artificial intelligences. I don’t agree that all entities which can self-copy and have moral value should be destroyed, which is what I thought your initial claim was, but given your clarification I don’t think we have quite that much of a disagreement on this topic.