rosehadshar

Karma: 1,327

Strategic awareness tools: design sketches

11 Feb 2026 12:28 UTC
18 points
0 comments · 1 min read · LW link
(www.forethought.org)

Design sketches for a more sensible world

9 Feb 2026 10:22 UTC
25 points
2 comments · 4 min read · LW link
(www.forethought.org)

Design sketches for angels-on-the-shoulder

9 Feb 2026 9:52 UTC
23 points
0 comments · 2 min read · LW link
(www.forethought.org)

Thoughts on AGI and world government

29 Jan 2026 7:22 UTC
2 points
1 comment · 7 min read · LW link
(www.forethought.org)

Lit review of some international organisations

14 Jan 2026 7:52 UTC
6 points
0 comments · 22 min read · LW link
(www.forethought.org)

New 80k problem profile: extreme power concentration

12 Dec 2025 13:05 UTC
48 points
12 comments · 4 min read · LW link

What would adults in the room know about AI risk?

20 Nov 2025 9:11 UTC
18 points
2 comments · 3 min read · LW link

Sense-making about extreme power concentration

11 Sep 2025 10:09 UTC
71 points
25 comments · 4 min read · LW link

Good government

10 Sep 2025 13:22 UTC
26 points
0 comments · 6 min read · LW link

The Industrial Explosion

26 Jun 2025 14:41 UTC
128 points
70 comments · 15 min read · LW link
(www.forethought.org)

AI-enabled coups: a small group could use AI to seize power

16 Apr 2025 16:51 UTC
137 points
23 comments · 7 min read · LW link

Three Types of Intelligence Explosion

17 Mar 2025 14:47 UTC
40 points
8 comments · 3 min read · LW link
(www.forethought.org)

Intelsat as a Model for International AGI Governance

13 Mar 2025 12:58 UTC
45 points
0 comments · 1 min read · LW link
(www.forethought.org)

Should there be just one western AGI project?

3 Dec 2024 10:11 UTC
78 points
75 comments · 15 min read · LW link
(www.forethought.org)

New report: A review of the empirical evidence for existential risk from AI via misaligned power-seeking

4 Apr 2024 23:41 UTC
31 points
5 comments · 1 min read · LW link
(blog.aiimpacts.org)

Results from an Adversarial Collaboration on AI Risk (FRI)

11 Mar 2024 20:00 UTC
61 points
3 comments · 9 min read · LW link
(forecastingresearch.org)

[Question] Strongest real-world examples supporting AI risk claims?

5 Sep 2023 15:12 UTC
41 points
7 comments · 1 min read · LW link

Short timelines and slow, continuous takeoff as the safest path to AGI

21 Jun 2023 8:56 UTC
65 points
15 comments · 7 min read · LW link

The self-unalignment problem

14 Apr 2023 12:10 UTC
159 points
24 comments · 10 min read · LW link

Why Simulator AIs want to be Active Inference AIs

10 Apr 2023 18:23 UTC
108 points
9 comments · 8 min read · LW link · 1 review