StanislavKrym · Karma: 664
Was Anthropic that strategically incompetent?
15 Mar 2026 20:11 UTC · 8 points · 0 comments · 4 min read · LW link (www.lesswrong.com)
On Steven Byrnes’ ruthless ASI, (dis)analogies with humans and alignment proposals
20 Feb 2026 15:32 UTC · 9 points · 2 comments · 2 min read · LW link
Unprecedented Times Require Unprecedented Caution When Handling Context
29 Jan 2026 2:53 UTC · 4 points · 2 comments · 20 min read · LW link (hazardoustimes.substack.com)
Mechanize Work’s essay on Unfalsifiable Doom
30 Dec 2025 22:57 UTC · 10 points · 0 comments · 15 min read · LW link (www.mechanize.work)
Unpacking Jonah Wilberg’s Goddess of Everything Else
29 Dec 2025 18:25 UTC · 6 points · 2 comments · 4 min read · LW link
An AI-2027-like analysis of humans’ goals and ethics with conservative results
5 Dec 2025 21:37 UTC · 6 points · 0 comments · 4 min read · LW link
Beren’s Essay on Obedience and Alignment
18 Nov 2025 22:50 UTC · 33 points · 0 comments · 9 min read · LW link (www.beren.io)
[Question] How does one tell apart results in ethics and decision theory?
13 Nov 2025 23:42 UTC · 6 points · 0 comments · 2 min read · LW link
Fermi Paradox, Ethics and Astronomical waste
1 Nov 2025 15:24 UTC · 6 points · 0 comments · 1 min read · LW link
AI-202X-slowdown: can CoT-based AIs become capable of aligning the ASI?
15 Oct 2025 22:46 UTC · 18 points · 0 comments · 6 min read · LW link
SE Gyges’ response to AI-2027
15 Aug 2025 21:54 UTC · 32 points · 13 comments · 46 min read · LW link (www.verysane.ai)
[Question] Are two potentially simple techniques an example of Mencken’s law?
29 Jul 2025 23:37 UTC · 4 points · 4 comments · 2 min read · LW link
AI-202X: a game between humans and AGIs aligned to different futures?
1 Jul 2025 23:37 UTC · 5 points · 0 comments · 16 min read · LW link
Does the Taiwan invasion prevent mankind from obtaining the aligned ASI?
3 Jun 2025 23:35 UTC · −14 points · 1 comment · 5 min read · LW link
[Question] Colonialism in space: Does a collection of minds have exactly two attractors?
27 May 2025 23:35 UTC · 7 points · 8 comments · 1 min read · LW link
Revisiting the ideas for non-neuralese architectures
20 May 2025 23:35 UTC · 2 points · 0 comments · 1 min read · LW link
[Question] If only the most powerful AGI is misaligned, can it be used as a doomsday machine?
13 May 2025 18:12 UTC · −1 point · 0 comments · 1 min read · LW link
[Question] What kind of policy by an AGI would make people happy?
6 May 2025 18:05 UTC · 1 point · 2 comments · 1 min read · LW link
StanislavKrym’s Shortform
29 Apr 2025 17:57 UTC · 3 points · 18 comments · 1 min read · LW link
[Question] To what ethics is an AGI actually safely alignable?
20 Apr 2025 17:09 UTC · 1 point · 6 comments · 4 min read · LW link