Stephen McAleese (Karma: 610)
Computer science master’s student interested in AI and AI safety.
Summary of “AGI Ruin: A List of Lethalities”
10 Jun 2022 22:35 UTC · 44 points · 2 comments · 8 min read
How Do AI Timelines Affect Existential Risk?
29 Aug 2022 16:57 UTC · 7 points · 9 comments · 23 min read
Estimating the Current and Future Number of AI Safety Researchers
28 Sep 2022 21:11 UTC · 46 points · 12 comments · 9 min read · (forum.effectivealtruism.org)
AGI as a Black Swan Event
4 Dec 2022 23:00 UTC · 8 points · 8 comments · 7 min read
GPT-4 Predictions
17 Feb 2023 23:20 UTC · 109 points · 27 comments · 11 min read
Retrospective on ‘GPT-4 Predictions’ After the Release of GPT-4
17 Mar 2023 18:34 UTC · 22 points · 6 comments · 6 min read
An Overview of the AI Safety Funding Situation
12 Jul 2023 14:54 UTC · 62 points · 3 comments · 1 min read
Could We Automate AI Alignment Research?
10 Aug 2023 12:17 UTC · 27 points · 10 comments · 21 min read