
Andy E Williams

Karma: −39

A Functional Model of Intelligence May Be Required to Solve Alignment. Why Can’t We Test That Hypothesis?

Andy E Williams · 15 Jun 2025 20:41 UTC
1 point
0 comments · 4 min read · LW link

Why the Alignment Crisis Asks Coders to Become Philosophers—and Philosophers to Become Coders

Andy E Williams · 15 Jun 2025 20:41 UTC
1 point
0 comments · 2 min read · LW link

The Conceptual Near-Singularity: The Mother of All Hallucinations?

Andy E Williams · 15 Jun 2025 20:41 UTC
1 point
0 comments · 18 min read · LW link

The Illusion of Iterative Improvement: Why AI (and Humans) Fail to Track Their Own Epistemic Drift

Andy E Williams · 27 Feb 2025 16:26 UTC
1 point
3 comments · 4 min read · LW link

Preserving Epistemic Novelty in AI: Experiments, Insights, and the Case for Decentralized Collective Intelligence

Andy E Williams · 8 Feb 2025 10:25 UTC
−4 points
8 comments · 7 min read · LW link

Why Linear AI Safety Hits a Wall and How Fractal Intelligence Unlocks Non-Linear Solutions

Andy E Williams · 5 Jan 2025 17:08 UTC
−5 points
6 comments · 5 min read · LW link