Jonas Hallgren
Karma: 1,575
AI Safety person currently working on multi-agent coordination problems.
A Taxonomy of Agents: Intro & Request for feedback
Jonas Hallgren, 27 Mar 2026 10:03 UTC, 13 points, 4 comments, 3 min read, LW link (equilibria1.substack.com)
A Compositional Philosophy of Science for Agent Foundations
Jonas Hallgren, 6 Mar 2026 8:40 UTC, 25 points, 1 comment, 13 min read, LW link (equilibria1.substack.com)
Systemic Risks and Where to Find Them
Jonas Hallgren, 13 Feb 2026 10:51 UTC, 14 points, 0 comments, 20 min read, LW link (equilibria1.substack.com)
Spectral Signatures of Gradual Disempowerment
Jonas Hallgren, 6 Feb 2026 15:08 UTC, 35 points, 4 comments, 17 min read, LW link (equilibria1.substack.com)
The Atoms of Knowledge Aren’t Universal
Jonas Hallgren, 3 Feb 2026 10:52 UTC, 19 points, 4 comments, 13 min read, LW link (equilibria1.substack.com)
Crystals in NNs: Technical Companion Piece
Jonas Hallgren, 28 Dec 2025 10:44 UTC, 24 points, 4 comments, 15 min read, LW link
Have You Tried Thinking About It As Crystals?
Jonas Hallgren, 28 Dec 2025 10:44 UTC, 75 points, 9 comments, 10 min read, LW link
Intuition Pump: The AI Society
Jonas Hallgren, 3 Dec 2025 9:00 UTC, 17 points, 0 comments, 5 min read, LW link
Cancer; A Crime Story (and other tales of optimization gone wrong)
Jonas Hallgren, 7 Nov 2025 7:09 UTC, 19 points, 2 comments, 12 min read, LW link
System Level Safety Evaluations
markov and Jonas Hallgren, 29 Sep 2025 13:57 UTC, 16 points, 0 comments, 9 min read, LW link (equilibria1.substack.com)
A Lens on the Sharp Left Turn: Optimization Slack
Jonas Hallgren, 16 Sep 2025 8:31 UTC, 28 points, 3 comments, 4 min read, LW link
A Phylogeny of Agents
Jonas Hallgren and markov, 15 Aug 2025 10:47 UTC, 40 points, 12 comments, 6 min read, LW link (substack.com)
The Alignment Mapping Program: Forging Independent Thinkers in AI Safety—A Pilot Retrospective
Alvin Ånestrand, Jonas Hallgren and Utilop, 10 Jan 2025 16:22 UTC, 31 points, 0 comments, 4 min read, LW link
Meditation insights as phase shifts in your self-model
Jonas Hallgren, 7 Jan 2025 10:09 UTC, 15 points, 3 comments, 3 min read, LW link
Model Integrity: MAI on Value Alignment
Jonas Hallgren, 5 Dec 2024 17:11 UTC, 6 points, 11 comments, 1 min read, LW link (meaningalignment.substack.com)
Reprograming the Mind: Meditation as a Tool for Cognitive Optimization
Jonas Hallgren, 11 Jan 2024 12:03 UTC, 34 points, 3 comments, 11 min read, LW link
How well does your research adress the theory-practice gap?
Jonas Hallgren, 8 Nov 2023 11:27 UTC, 20 points, 0 comments, 10 min read, LW link
Jonas Hallgren’s Shortform
Jonas Hallgren, 11 Oct 2023 9:52 UTC, 3 points, 44 comments, 1 min read, LW link
Advice for new alignment people: Info Max
Jonas Hallgren, 30 May 2023 15:42 UTC, 24 points, 4 comments, 5 min read, LW link
Respect for Boundaries as non-arbirtrary coordination norms
Jonas Hallgren, 9 May 2023 19:42 UTC, 9 points, 3 comments, 7 min read, LW link