Valentin2026

Karma: 240

How do AI researchers define AI sentience? Participate in the poll

Valentin2026 · 4 Jul 2025 12:29 UTC
6 points
4 comments · 1 min read · LW link

Mechanistic Interpretability Via Learning Differential Equations: AI Safety Camp Project Intermediate Report

8 May 2025 14:45 UTC
8 points
0 comments · 7 min read · LW link

Mentorship in AGI Safety: Applications for mentorship are open!

28 Jun 2024 14:49 UTC
5 points
0 comments · 1 min read · LW link

Mentorship in AGI Safety (MAGIS) call for mentors

23 May 2024 18:28 UTC
32 points
3 comments · 2 min read · LW link

What to do if a nuclear weapon is used in Ukraine?

Valentin2026 · 19 Oct 2022 18:43 UTC
13 points
9 comments · 3 min read · LW link

[Question] Would a "Manhattan Project" style be beneficial or deleterious for AI Alignment?

Valentin2026 · 4 Aug 2022 19:12 UTC
5 points
1 comment · 1 min read · LW link

[Question] Impactful data science projects

Valentin2026 · 11 Apr 2022 4:27 UTC
5 points
2 comments · 1 min read · LW link

A proposed system for ideas jumpstart

Valentin2026 · 14 Dec 2021 21:01 UTC
4 points
2 comments · 3 min read · LW link