Roko

Karma: 6,359

Turing-Test-Passing AI implies Aligned AI

Roko · 31 Dec 2024 19:59 UTC
−9 points
31 comments · 5 min read · LW link

[Question] Is AI alignment a purely functional property?

Roko · 15 Dec 2024 21:42 UTC
13 points
8 comments · 1 min read · LW link

[Question] What is MIRI currently doing?

Roko · 14 Dec 2024 2:39 UTC
33 points
14 comments · 1 min read · LW link

The Dissolution of AI Safety

Roko · 12 Dec 2024 10:34 UTC
8 points
44 comments · 1 min read · LW link
(www.transhumanaxiology.com)

[Question] What actual bad outcome has “ethics-based” RLHF AI Alignment already prevented?

Roko · 19 Oct 2024 6:11 UTC
7 points
16 comments · 1 min read · LW link

The ELYSIUM Proposal - Extrapolated voLitions Yielding Separate Individualized Utopias for Mankind

Roko · 16 Oct 2024 1:24 UTC
9 points
18 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

A Heuristic Proof of Practical Aligned Superintelligence

Roko · 11 Oct 2024 5:05 UTC
7 points
6 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

A Nonconstructive Existence Proof of Aligned Superintelligence

Roko · 12 Sep 2024 3:20 UTC
0 points
80 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

Ice: The Penultimate Frontier

Roko · 13 Jul 2024 23:44 UTC
65 points
56 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

Less Wrong automated systems are inadvertently Censoring me

Roko · 21 Feb 2024 12:57 UTC
2 points
52 comments · 1 min read · LW link

A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is

Roko · 7 Feb 2024 21:49 UTC
−1 points
36 comments · 5 min read · LW link

The Math of Suspicious Coincidences

Roko · 7 Feb 2024 13:32 UTC
25 points
3 comments · 4 min read · LW link

Brute Force Manufactured Consensus is Hiding the Crime of the Century

Roko · 3 Feb 2024 20:36 UTC
217 points
156 comments · 9 min read · LW link

Without Fundamental Advances, Rebellion and Coup d’État are the Inevitable Outcomes of Dictators & Monarchs Trying to Control Large, Capable Countries

Roko · 31 Jan 2024 10:14 UTC
27 points
34 comments · 1 min read · LW link

“AI Alignment” is a Dangerously Overloaded Term

Roko · 15 Dec 2023 14:34 UTC
108 points
100 comments · 3 min read · LW link

[Question] Could Germany have won World War I with high probability given the benefit of hindsight?

Roko · 27 Nov 2023 22:52 UTC
10 points
18 comments · 1 min read · LW link

[Question] Could World War I have been prevented given the benefit of hindsight?

Roko · 27 Nov 2023 22:39 UTC
16 points
8 comments · 1 min read · LW link

“Why can’t you just turn it off?”

Roko · 19 Nov 2023 14:46 UTC
48 points
25 comments · 1 min read · LW link

On Overhangs and Technological Change

Roko · 5 Nov 2023 22:58 UTC
50 points
19 comments · 2 min read · LW link

Stuxnet, not Skynet: Humanity’s disempowerment by AI

Roko · 4 Nov 2023 22:23 UTC
107 points
24 comments · 6 min read · LW link