
StanislavKrym

Karma: 664

Was Anthropic that strategically incompetent?

StanislavKrym · 15 Mar 2026 20:11 UTC
8 points
0 comments · 4 min read · LW link
(www.lesswrong.com)

On Steven Byrnes’ ruthless ASI, (dis)analogies with humans and alignment proposals

StanislavKrym · 20 Feb 2026 15:32 UTC
9 points
2 comments · 2 min read · LW link

Unprecedented Times Require Unprecedented Caution When Handling Context

StanislavKrym · 29 Jan 2026 2:53 UTC
4 points
2 comments · 20 min read · LW link
(hazardoustimes.substack.com)

Mechanize Work’s essay on Unfalsifiable Doom

StanislavKrym · 30 Dec 2025 22:57 UTC
10 points
0 comments · 15 min read · LW link
(www.mechanize.work)

Unpacking Jonah Wilberg’s Goddess of Everything Else

StanislavKrym · 29 Dec 2025 18:25 UTC
6 points
2 comments · 4 min read · LW link

An AI-2027-like analysis of humans’ goals and ethics with conservative results

StanislavKrym · 5 Dec 2025 21:37 UTC
6 points
0 comments · 4 min read · LW link

Beren’s Essay on Obedience and Alignment

StanislavKrym · 18 Nov 2025 22:50 UTC
33 points
0 comments · 9 min read · LW link
(www.beren.io)

[Question] How does one tell apart results in ethics and decision theory?

StanislavKrym · 13 Nov 2025 23:42 UTC
6 points
0 comments · 2 min read · LW link

Fermi Paradox, Ethics and Astronomical waste

StanislavKrym · 1 Nov 2025 15:24 UTC
6 points
0 comments · 1 min read · LW link

AI-202X-slowdown: can CoT-based AIs become capable of aligning the ASI?

StanislavKrym · 15 Oct 2025 22:46 UTC
18 points
0 comments · 6 min read · LW link

SE Gyges’ response to AI-2027

StanislavKrym · 15 Aug 2025 21:54 UTC
32 points
13 comments · 46 min read · LW link
(www.verysane.ai)

[Question] Are two potentially simple techniques an example of Mencken’s law?

StanislavKrym · 29 Jul 2025 23:37 UTC
4 points
4 comments · 2 min read · LW link

AI-202X: a game between humans and AGIs aligned to different futures?

StanislavKrym · 1 Jul 2025 23:37 UTC
5 points
0 comments · 16 min read · LW link

Does the Taiwan invasion prevent mankind from obtaining the aligned ASI?

StanislavKrym · 3 Jun 2025 23:35 UTC
−14 points
1 comment · 5 min read · LW link

[Question] Colonialism in space: Does a collection of minds have exactly two attractors?

StanislavKrym · 27 May 2025 23:35 UTC
7 points
8 comments · 1 min read · LW link

Revisiting the ideas for non-neuralese architectures

StanislavKrym · 20 May 2025 23:35 UTC
2 points
0 comments · 1 min read · LW link

[Question] If only the most powerful AGI is misaligned, can it be used as a doomsday machine?

StanislavKrym · 13 May 2025 18:12 UTC
−1 points
0 comments · 1 min read · LW link

[Question] What kind of policy by an AGI would make people happy?

StanislavKrym · 6 May 2025 18:05 UTC
1 point
2 comments · 1 min read · LW link

StanislavKrym’s Shortform

StanislavKrym · 29 Apr 2025 17:57 UTC
3 points
18 comments · 1 min read · LW link

[Question] To what ethics is an AGI actually safely alignable?

StanislavKrym · 20 Apr 2025 17:09 UTC
1 point
6 comments · 4 min read · LW link