ank

Karma: −9

We modeled the ultimate futures (billions of years from now) for 3+ years, Xerox PARC-style, and got very exciting results, including AI safety results. This person is more public and shares updates: x.com/MaskedMelonUsk

[Question] Share AI Safety Ideas: Both Crazy and Not. №2

ank · 28 Mar 2025 17:22 UTC
2 points
10 comments · 1 min read · LW link

Give Neo a Chance

ank · 6 Mar 2025 1:48 UTC
3 points
7 comments · 7 min read · LW link

[Question] Share AI Safety Ideas: Both Crazy and Not

ank · 1 Mar 2025 19:08 UTC
17 points
28 comments · 1 min read · LW link

Unaligned AGI & Brief History of Inequality

ank · 22 Feb 2025 16:26 UTC
−20 points
4 comments · 7 min read · LW link

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics

ank · 22 Feb 2025 0:12 UTC
1 point
0 comments · 6 min read · LW link

Places of Loving Grace [Story]

ank · 18 Feb 2025 23:49 UTC
−1 points
0 comments · 4 min read · LW link

Artificial Static Place Intelligence: Guaranteed Alignment

ank · 15 Feb 2025 11:08 UTC
2 points
2 comments · 2 min read · LW link

Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia

ank · 13 Feb 2025 22:35 UTC
1 point
2 comments · 11 min read · LW link

Rational Effective Utopia & Narrow Way There: Math-Proven Safe Static Multiversal mAX-Intelligence (AXI), Multiversal Alignment, New Ethicophysics… (Aug 11)

ank · 11 Feb 2025 3:21 UTC
13 points
8 comments · 38 min read · LW link

How To Prevent a Dystopia

ank · 29 Jan 2025 14:16 UTC
−3 points
4 comments · 1 min read · LW link

ank’s Shortform

ank · 21 Jan 2025 16:55 UTC
1 point
19 comments · 1 min read · LW link