While I vaguely agree with you, this goes directly against local opinion. Eliezer tweeted about Elon Musk’s founding of OpenAI, saying that OpenAI’s desire for everyone to have AI has trashed the possibility of alignment in time.
Eliezer’s point is well-taken, but the future might have lots of different kinds of software! This post seemed to be mostly talking about software that we’d use for brain-computer interfaces, or for uploaded simulations of human minds, not about AGI. Paul Christiano talks about exactly these kinds of software security concerns for uploaded minds here: https://www.alignmentforum.org/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people
I’m not fundamentally opposed to exceptions in specific areas if there is sufficient reason. If I found the case that AI is such an exception convincing, I might carve one out for it. In most cases, however, and specifically in the mission of raising the sanity waterline so that we collectively make better decisions on things like prioritising x-risks, I would argue that a lack of free software, and the related issues of technology governance, is currently a bottleneck in raising that waterline.
This sounds to me like a bunch of buzzwords thrown together. You have argued in your post that it might be useful to have free software but I have seen no argument why it’s currently a bottleneck for raising the waterline.
Apologies, but I’m unclear whether you are characterising my post or my comment as “a bunch of buzzwords thrown together”; could you clarify? The post’s main thrust was the simpler case that the more of our cognition takes place on a medium which we don’t control and which is subject to external interests, the more concerned we have to be about trusting our cognition. The clearest and most extreme case is a consciousness running entirely on someone else’s hardware and software stack.

However, I’ll grant that I’ve not yet made the full case that this is a bottleneck in raising the sanity waterline; perhaps that warrants a follow-up post. In outline: the asymmetric power relationship and the lack of accountability, transparency, oversight, and effective governance of the big proprietary tech platforms are undermining trust in our collective, and indeed individual, ability to discern the quality of information, and this erosion of the epistemic commons is undermining our ability to reason effectively and converge on better models.

In Aumann agreement terms: common priors are distorted by amplified availability heuristics in online bubbles; common knowledge is compromised by pseudoscience and scientific cargo cults framed in ways that are hard to distinguish from ‘the real deal’; and the ‘honest seekers of truth’ assumption is undermined by bots, trolls, and agents provocateurs masquerading as real people while acting on behalf of entities with specific agendas. You only fix this with better governance, and I contend that free software is a major part of that better governance model.
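The Aumann point can be made concrete with a toy Bayesian sketch (all numbers here are hypothetical, purely for illustration): two agents see the same evidence about a coin, but one starts from a prior inflated by an availability bubble, so their posteriors never match even though the evidence is shared.

```python
# Toy sketch: Aumann-style agreement presupposes common priors.
# Two agents observe the SAME evidence about a coin's bias; only
# their priors differ. Hypothetical numbers throughout.

def posterior_mean(prior_heads, prior_tails, heads, tails):
    # Beta-Bernoulli posterior mean estimate of P(heads)
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

evidence = (7, 3)  # 7 heads, 3 tails, observed by both agents

common = posterior_mean(1, 1, *evidence)      # uniform prior Beta(1, 1)
distorted = posterior_mean(10, 1, *evidence)  # prior skewed by an availability bubble

print(round(common, 3), round(distorted, 3))  # prints: 0.667 0.81
```

Shared evidence alone doesn’t close the gap: with distorted priors the two agents converge toward different beliefs, which is the failure mode the paragraph above attributes to online bubbles.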
The problem of raising the sanity waterline in the world that currently exists is not one of dealing with emulated consciousness. The word ‘bottleneck’ only makes sense relative to a particular moment in time: before emulated consciousness is invented, any problems with emulated consciousness are not bottlenecks.
Yes, I’m merely using emulated consciousness as the idealised example of a problem that also applies to non-emulated consciousnesses outsourcing cognitive work to computer systems that are outside their control and may be misaligned with their interests. It is a bigger problem if you are completely emulated, but still a problem if you are using computational prostheses. I say it is bottlenecking us because even its partial form seems to be undermining our ability to have rational discourse in the present.
I’ll own up to a downvote on the grounds that I think you added nothing to this conversation and were rude. In the proposed scoring system, I’d give you negative aim and negative truth-seeking. In addition, the post you linked isn’t an answer, but a question, so you didn’t even add information to the argument, so I’d give you negative correctness as well.
If you thought the answers in that thread backed you up:
It’s a mixed bag. A lot of near term work is scientific, in that theories are proposed and experiments run to test them, but from what I can tell that work is also incredibly myopic and specific to the details of present day algorithms and whether any of it will generalize to systems further down the road is exceedingly unclear.
...
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn’t make it pseudoscience. Falsifiability is the key to demarcation.
That summarizes a few answers.
I agree, I wouldn’t consider AI alignment to be scientific either. How is it a “problem” though?
I didn’t find the full joke/meme again, but, seriously, OpenAI should be renamed to ClosedAI.
AI alignment is pseudo-science.
Would you have downvoted the comment if it had been a simple link to what appeared to be a positive view of AI alignment?
Truth can be negative. Is this forum a cult that refuses to acknowledge alternative ways of approaching reality?