
Rauno Arike

Karma: 621

Hidden Reasoning in LLMs: A Taxonomy

25 Aug 2025 22:43 UTC
66 points
12 comments · 12 min read · LW link

How we spent our first two weeks as an independent AI safety research group

11 Aug 2025 19:32 UTC
28 points
0 comments · 10 min read · LW link

Extract-and-Evaluate Monitoring Can Significantly Enhance CoT Monitor Performance (Research Note)

8 Aug 2025 10:41 UTC
51 points
7 comments · 10 min read · LW link

Aether July 2025 Update

1 Jul 2025 21:08 UTC
24 points
7 comments · 3 min read · LW link

[Question] What faithfulness metrics should general claims about CoT faithfulness be based upon?

Rauno Arike · 8 Apr 2025 15:27 UTC
24 points
0 comments · 4 min read · LW link

On Recent Results in LLM Latent Reasoning

Rauno Arike · 31 Mar 2025 11:06 UTC
35 points
6 comments · 13 min read · LW link

The Best Lecture Series on Every Subject

Rauno Arike · 24 Mar 2025 20:03 UTC
13 points
1 comment · 2 min read · LW link

Rauno’s Shortform

Rauno Arike · 15 Nov 2024 12:08 UTC
3 points
34 comments · 1 min read · LW link

A Dialogue on Deceptive Alignment Risks

Rauno Arike · 25 Sep 2024 16:10 UTC
11 points
0 comments · 18 min read · LW link

[Interim research report] Evaluating the Goal-Directedness of Language Models

18 Jul 2024 18:19 UTC
40 points
4 comments · 11 min read · LW link

Early Experiments in Reward Model Interpretation Using Sparse Autoencoders

3 Oct 2023 7:45 UTC
18 points
0 comments · 5 min read · LW link

Exploring the Lottery Ticket Hypothesis

Rauno Arike · 25 Apr 2023 20:06 UTC
58 points
3 comments · 11 min read · LW link

[Question] Request for Alignment Research Project Recommendations

Rauno Arike · 3 Sep 2022 15:29 UTC
10 points
2 comments · 1 min read · LW link

Countering arguments against working on AI safety

Rauno Arike · 20 Jul 2022 18:23 UTC
7 points
2 comments · 7 min read · LW link

Clarifying the confusion around inner alignment

Rauno Arike · 13 May 2022 23:05 UTC
31 points
0 comments · 11 min read · LW link