PeterH
This post may have various effects. Two that come to mind:
(1) Positively influence a future AI.
(2) Damage the credibility of people who are concerned about AI safety, especially the community of people associated with LessWrong.
If the post attracts significant attention in the world outside of LessWrong, I expect that (2) will be the larger effect, so far as the expected value of the future goes.
Honestly this reminds me of “Death with Dignity” and other recent examples of friendly fire from Eliezer.
Yep, if the pilot goes well then I imagine we’ll do all the >100 karma posts, or something like that.
We’ll add narrations for all >100 karma posts on the EA Forum later this month.
Thanks! We do have feature (2)—we remember whatever playback speed you last set. If you’re not seeing this, please let me know what browser you’re using.
Thanks! We’re currently using Azure TTS. Our plan is to review every couple months and update to use better voices when they become available on Azure or elsewhere. Elevenlabs is a good candidate but unfortunately they’re ~10x more expensive per hour of narration than Azure ($10 vs $1).
I replaced it because it seemed like a less useful format.
Azure TTS cost per million characters = $16
Elevenlabs TTS cost per million characters = $180
1 million characters is roughly 200,000 words.
One hour of audio is roughly 9000 words.
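Putting the figures above together gives the ~$1 vs ~$10 per hour comparison. A rough back-of-envelope sketch (exact character counts will vary by text):

```python
# Back-of-envelope cost per hour of narration, using the figures quoted above.
COST_PER_MILLION_CHARS = {"azure": 16.0, "elevenlabs": 180.0}  # USD
WORDS_PER_MILLION_CHARS = 200_000
WORDS_PER_HOUR_OF_AUDIO = 9_000

def cost_per_hour(provider: str) -> float:
    """USD cost to narrate one hour of audio with the given TTS provider."""
    chars_per_hour = WORDS_PER_HOUR_OF_AUDIO / WORDS_PER_MILLION_CHARS * 1_000_000
    return chars_per_hour * COST_PER_MILLION_CHARS[provider] / 1_000_000

print(round(cost_per_hour("azure"), 2))       # ~0.72, i.e. roughly $1/hour
print(round(cost_per_hour("elevenlabs"), 2))  # ~8.10, i.e. roughly $10/hour
```

So an hour of audio is about 45,000 characters, and the two providers differ by a factor of ~11, consistent with the "~10x" figure.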
Thanks for the heads up. Each of those code blocks is being treated separately, so the placeholder is repeated several times. We’ll release a fix for this next week.
Usually the text inside code blocks is not suitable for narration. This is a case where ideally we would narrate them. We'll have a think about ways to detect this.
Nice. One thing: initially I couldn’t figure out how to read this because I didn’t see the key at the top. I think the key is a bit too easy to miss if you are zooming in to look at the image on mobile. Maybe make it more prominent?
Flagging the most upvoted comment thread on EA Forum, with replies from Ozzie, which begins:
This post contains many claims that you interpret OpenAI to be making. However, unless I’m missing something, I don’t see citations for any of the claims you attribute to them. Moreover, several of the claims feel like they could potentially be described as misinterpretations of what OpenAI is saying or merely poorly communicated ideas.
It sounds like your story is similar to the one that Bernard Williams would tell.
Williams was in critical dialogue with Peter Singer and Derek Parfit for much of his career.
This led to a book: Philosophy as a Humanistic Discipline.
If you’re curious:
Williams's talk on The Human Prejudice (audio)
Adrian Moore on Williams on Ethics (audio)
My notes on Bernard Williams.