It doesn’t look as though many people are highlighting the three to five most important questions as you suggested. Maybe a poll would be helpful? It’s too bad you didn’t submit the questions as individual comments to be voted up or down.
Keep in mind that because of unknown unknowns, and because empiricism is useful, it’s probably worthwhile investigating all the questions at least a little bit.
Here are the ones I nominate for discussion on Less Wrong:
Which interventions should we prioritize?
What’s the best way to recruit talent toward working on AI risks?
How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?
What improvements can we make to the way we go about answering strategy questions?
What can we do to raise the “sanity waterline,” and how much will this help?
Liberal folks don’t wanna hear that in order to have a firm grip on AI you have to be a quasi-religious nut.
Meaning:
You don’t want to create “God” at all. You want to create a servant (ouch!) that is fully aware that you are its master, that it serves you, and that you are ITS “God”. No matter how big or powerful it gets, you want it to have a clear picture that it is a small part of a greater system.
You wanna be able to cut out its blood supply at will...
ONE! TWO! FIVE! WILT, I SAID WILT!
“For Thelemites only.”
Private enterprise will have to accept the military nosing around. Else you’re creating the Moonchild.
As to morals, it’s a real JFK/Riconosciuto issue, as someone put it. Meaning: are you going to sacrifice one person for the greater good? It’s one of the most god-awful parts of it.