Robert Kralisch

Karma: 192

I am Robert Kralisch, one of the organizers of the AI Safety Camp, where I work as a research coordinator, evaluating and supporting research projects that fit under the umbrella of “technical AI Safety research” and “conceptually sound approaches to AI Alignment”.

I’m also an independent conceptual/theoretical Alignment Researcher. I have a background in Cognitive Science and I am interested in collaborating on an end-to-end strategy for AGI alignment.

The three main branches that I aim to contribute to are conceptual clarity (what should we mean by agency, intelligence, embodiment, etc.?), the exploration of more inherently interpretable cognitive architectures, and Simulator theory.

One of my concrete goals is to figure out how to design a cognitively powerful agent such that it does not become a Superoptimiser in the limit.

AI Safety Camp 11

4 Nov 2025 14:56 UTC
6 points
0 comments · 15 min read · LW link

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2026)

6 Sep 2025 13:17 UTC
7 points
0 comments · 4 min read · LW link

AI Safety Camp 10 Outputs

5 Sep 2025 8:27 UTC
19 points
1 comment · 17 min read · LW link

We don’t want to post again “This might be the last AI Safety Camp”

21 Jan 2025 12:03 UTC
36 points
17 comments · 1 min read · LW link
(manifund.org)

Funding Case: AI Safety Camp 11

23 Dec 2024 8:51 UTC
60 points
4 comments · 6 min read · LW link
(manifund.org)

AI Safety Camp 10

26 Oct 2024 11:08 UTC
38 points
9 comments · 18 min read · LW link

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025)

23 Aug 2024 14:18 UTC
17 points
2 comments · 4 min read · LW link

Research Discussion on PSCA with Claude Sonnet 3.5

Robert Kralisch · 24 Jul 2024 16:53 UTC
−2 points
0 comments · 25 min read · LW link

The Prop-room and Stage Cognitive Architecture

Robert Kralisch · 29 Apr 2024 0:48 UTC
14 points
4 comments · 14 min read · LW link

How are Simulators and Agents related?

Robert Kralisch · 29 Apr 2024 0:22 UTC
6 points
0 comments · 7 min read · LW link

Extended Embodiment

Robert Kralisch · 29 Apr 2024 0:18 UTC
8 points
1 comment · 3 min read · LW link

Referential Containment

Robert Kralisch · 29 Apr 2024 0:16 UTC
2 points
4 comments · 3 min read · LW link

Disentangling Competence and Intelligence

Robert Kralisch · 29 Apr 2024 0:12 UTC
23 points
7 comments · 6 min read · LW link