Corm

Karma: 179

Hi, I’m Cormac. I’m currently trying to find where I can have the most impact on AI existential risk.

sladebyrd.com

I Had Claude Read Every AI Safety Paper Since 2020, Here’s the DB

Corm · 3 Mar 2026 17:47 UTC
57 points · 13 comments · 3 min read

101 Humans of New York on the Risks of AI

Corm · 8 Apr 2026 17:21 UTC
39 points · 3 comments · 7 min read

Side by Side Comparison of RSP Versions

Corm · 27 Feb 2026 21:11 UTC
18 points · 0 comments · 1 min read

White-Box Attacks on the Best Open-Weight Model: CCP Bias vs. Safety Training in Kimi K2.5

Corm · 3 Mar 2026 17:47 UTC
16 points · 2 comments · 5 min read

Anthropic is Really Pushing the Frontier, What Should We Think?

Corm · 10 Apr 2026 18:25 UTC
11 points · 3 comments · 11 min read

What Are My Values?

Corm · 16 Mar 2026 20:43 UTC
7 points · 0 comments · 8 min read

InkSF, an Opening on Finding the Highest Impact in AI Safety and Moving to SF

Corm · 1 Apr 2026 19:01 UTC
4 points · 0 comments · 4 min read