StanislavKrym (Karma: 274)
Posts
AI-202X-slowdown: can CoT-based AIs become capable of aligning the ASI?
15 Oct 2025 22:46 UTC · 21 points · 0 comments · 6 min read · LW link
SE Gyges’ response to AI-2027
15 Aug 2025 21:54 UTC · 29 points · 13 comments · 46 min read · LW link (www.verysane.ai)
[Question] Are two potentially simple techniques an example of Mencken’s law?
29 Jul 2025 23:37 UTC · 4 points · 4 comments · 2 min read · LW link
AI-202X: a game between humans and AGIs aligned to different futures?
1 Jul 2025 23:37 UTC · 5 points · 0 comments · 16 min read · LW link
Does the Taiwan invasion prevent mankind from obtaining the aligned ASI?
3 Jun 2025 23:35 UTC · −14 points · 1 comment · 5 min read · LW link
[Question] Colonialism in space: Does a collection of minds have exactly two attractors?
27 May 2025 23:35 UTC · 3 points · 5 comments · 1 min read · LW link
Revisiting the ideas for non-neuralese architectures
20 May 2025 23:35 UTC · 2 points · 0 comments · 1 min read · LW link
[Question] If only the most powerful AGI is misaligned, can it be used as a doomsday machine?
13 May 2025 18:12 UTC · −1 points · 0 comments · 1 min read · LW link
[Question] What kind of policy by an AGI would make people happy?
6 May 2025 18:05 UTC · 1 point · 2 comments · 1 min read · LW link
StanislavKrym’s Shortform
29 Apr 2025 17:57 UTC · 3 points · 8 comments · 1 min read · LW link
[Question] To what ethics is an AGI actually safely alignable?
20 Apr 2025 17:09 UTC · 1 point · 6 comments · 4 min read · LW link
[Question] How likely are the USA to decay and how will it influence the AI development?
12 Apr 2025 4:42 UTC · 10 points · 0 comments · 1 min read · LW link
Do we want too much from a potentially godlike AGI?
11 Apr 2025 23:33 UTC · −1 points · 0 comments · 2 min read · LW link
[Question] Is the ethics of interaction with primitive peoples already solved?
11 Apr 2025 14:56 UTC · −4 points · 0 comments · 1 min read · LW link
[Question] What are the fundamental differences between teaching the AIs and humans?
6 Apr 2025 18:17 UTC · 3 points · 0 comments · 1 min read · LW link
What does aligning AI to an ideology mean for true alignment?
30 Mar 2025 15:12 UTC · 1 point · 0 comments · 8 min read · LW link
[Question] How many times faster can the AGI advance the science than humans do?
28 Mar 2025 15:16 UTC · 0 points · 0 comments · 1 min read · LW link
Will the AGIs be able to run the civilisation?
28 Mar 2025 4:50 UTC · −7 points · 2 comments · 3 min read · LW link
[Question] Is AGI actually that likely to take off given the world energy consumption?
27 Mar 2025 23:13 UTC · 2 points · 2 comments · 1 min read · LW link