
Andy E Williams

Karma: −27

A Functional Model of Intelligence May Be Required to Solve Alignment. Why Can't We Test That Hypothesis?

Andy E Williams · Jun 15, 2025, 8:41 PM
1 point
0 comments · 4 min read · LW link

Why the Alignment Crisis Asks Coders to Become Philosophers—and Philosophers to Become Coders

Andy E Williams · Jun 15, 2025, 8:41 PM
1 point
0 comments · 2 min read · LW link

The Conceptual Near-Singularity: The Mother of All Hallucinations?

Andy E Williams · Jun 15, 2025, 8:41 PM
1 point
0 comments · 18 min read · LW link

The Illusion of Iterative Improvement: Why AI (and Humans) Fail to Track Their Own Epistemic Drift

Andy E Williams · Feb 27, 2025, 4:26 PM
1 point
3 comments · 4 min read · LW link

Preserving Epistemic Novelty in AI: Experiments, Insights, and the Case for Decentralized Collective Intelligence

Andy E Williams · Feb 8, 2025, 10:25 AM
−4 points
8 comments · 7 min read · LW link

Why Linear AI Safety Hits a Wall and How Fractal Intelligence Unlocks Non-Linear Solutions

Andy E Williams · Jan 5, 2025, 5:08 PM
−5 points
6 comments · 5 min read · LW link