aspiring rationalist
In Rob Bensinger’s typology: AGI-alarmed, AI welfarist, and eventualist.
they/them
Labs can provide this kind of information to evaluators instead, so that they don’t have to optimize the CoT for the public.
For what it’s worth, Fatebook already exists for the purpose of helping you make and track your predictions.
Even given all the flaws, I don’t know of a resource for laypeople that’s half as good at explaining what AI is, describing superintelligence, and making the basic case for misalignment risk.
You might not have read aisafety.dance. Although it doesn’t explain in detail what AI and superintelligence are, it does a really good job of describing the specifics of AI safety, possibly on par with the book (I haven’t read the book yet, so this is an educated guess).
Coherence is the property that an agent (always) updates their beliefs through probabilistic conditioning. Usually, one argues that coherence is desirable via Cox’s theorem or the Dutch Book results. This makes coherence a very brittle thing: you are either coherent or you are not, and being approximately Bayesian in most senses still violates the conditions these results treat as desirable.
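To make the Dutch Book argument concrete, here is a toy numerical illustration (my own example, not from the original comment): an agent whose credences in an event and its negation sum to more than 1 will accept a pair of bets that together guarantee a loss, no matter which outcome occurs.

```python
# Toy Dutch Book sketch. Assumption: the agent will buy a ticket that
# pays 1 if event E occurs for any price up to their credence P(E).
p_A = 0.7      # agent's credence in A
p_not_A = 0.5  # agent's credence in not-A; incoherent, since 0.7 + 0.5 > 1

# Sell the agent one ticket on A and one on not-A at their stated credences.
total_paid = p_A + p_not_A

# Exactly one of A, not-A occurs, so exactly one ticket pays out 1.
payout = 1.0

guaranteed_loss = total_paid - payout
print(guaranteed_loss)  # a sure loss of about 0.2, whichever outcome occurs
```

A coherent agent, whose credences in A and not-A sum to exactly 1, pays exactly what the tickets return and cannot be exploited this way; this is the sense in which coherence is all-or-nothing.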
There is some incomplete text “If you ” here.
The link doesn’t work. Maybe you meant to use the sharing feature? (Besides just checking the link, I spotted the issue because it starts with https://chatgpt.com/c/…, which is typically associated with private chats.)