Nat Friedman wants a researcher/journalist for a short AI/rationalism/x-risk project
A new x-risk research lab is recruiting their first round of fellows (via @panickssery)
A “solutionist” third way on AI safety from Leopold (or thread version)
Jack Devanney on how “ALARA” makes nuclear energy expensive (via @s8mb). See also my review of Devanney’s book
“Instead of a desk, I would like to have a very large lazy susan in my office” (see also my attempts to visualize a solution)
What’s the best sci-fi about AI to read at this moment in history?
What explains the full history of the Irish population?
Why is the US an outlier in post-2000 suicide rate trends?
Do uncertain statements about the future have truth values?
Is any sigmoid an isomorphism between two groups over ℝ and (0,1)?
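For the logistic sigmoid σ(x) = 1/(1 + e⁻ˣ), the answer is yes: it maps (ℝ, +) isomorphically onto (0, 1) equipped with the operation a ⊕ b = ab / (ab + (1−a)(1−b)), since σ(x+y) = σ(x) ⊕ σ(y). A minimal numerical sketch of this identity (function names are my own):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: a bijection from ℝ onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def op(a, b):
    """The induced group operation on (0, 1): a ⊕ b = ab / (ab + (1-a)(1-b))."""
    return a * b / (a * b + (1 - a) * (1 - b))

# Homomorphism check: sigmoid(x + y) == sigmoid(x) ⊕ sigmoid(y)
x, y = 0.7, -1.3
assert abs(sigmoid(x + y) - op(sigmoid(x), sigmoid(y))) < 1e-12

# Identity element check: σ(0) = 0.5 is the identity of ⊕
assert abs(op(0.5, sigmoid(x)) - sigmoid(x)) < 1e-12
```

This only verifies the logistic case; other sigmoids (e.g. tanh rescaled into (0, 1)) would induce a different operation on (0, 1) but are likewise isomorphisms, being conjugates of the same map.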
Evidence for the human capital theory of the Industrial Revolution
Good concise clarification of a key difference in thinking on AI x-risk
Many AGI doomers & anti-doomers make terrible arguments
“Confabulate” instead of “hallucinate” for LLMs?
GPT-4 speaker intros
Practical advice for coping with the feeling of “imminent doomsday”
“One universal in the history of childhood stands above all others. The history of childhood is a history of death”
The Counter-Reformation’s search for heresy was a negative shock to science
As terrible as COVID was, we have progressed a lot in the last 200 years
The survival curve inflates like a sail, but the far end of it doesn’t move much
The “New York City’s Death Rate” chart is ambiguous to me. Is the red line at 2020 a graphical notation pointing to the current year, or is it a spike in the death rate?
It is a spike in the death rate, from COVID.