Michele Campolo

Karma: 107

Lifelong recursive self-improver, on his way to exploding really intelligently :D

More seriously: my posts are mostly about AI alignment, with an eye towards moral progress. I have a bachelor’s degree in mathematics, I did research at CEEALAR for four years, and now I do research independently.

A fun problem to think about:
Imagine it’s the year 1500. You want to make an AI that is able to tell you that witch hunts are a terrible idea and to convincingly explain why, despite the fact that many people around you seem to think the exact opposite. Assuming you have the technology, how do you do it?

I’m trying to solve that problem, with the difference that we are in the 21st century now (I know, massive spoiler, sorry about that).

The problem above, together with the fact that I’d like to avoid producing AI that can be used for bad purposes, is what motivates my research. If this sounds interesting to you, have a look at these two short posts. If you are looking for something more technical, consider setting some time aside to read these two.

Feel free to reach out if you relate!

You can support my research through Patreon here.

Work in progress:

One more reason for AI capable of independent moral reasoning: alignment itself and cause prioritisation

Michele Campolo · 22 Aug 2025 15:53 UTC
−3 points
0 comments · 3 min read · LW link

Doing good… best?

Michele Campolo · 22 Aug 2025 15:48 UTC
−1 points
6 comments · 2 min read · LW link

With enough knowledge, any conscious agent acts morally

Michele Campolo · 22 Aug 2025 15:44 UTC
−2 points
9 comments · 36 min read · LW link

Agents that act for reasons: a thought experiment

Michele Campolo · 24 Jan 2024 16:47 UTC
3 points
0 comments · 3 min read · LW link

Free agents

Michele Campolo · 27 Dec 2023 20:20 UTC
6 points
19 comments · 14 min read · LW link

On value in humans, other animals, and AI

Michele Campolo · 31 Jan 2023 23:33 UTC
3 points
17 comments · 5 min read · LW link

Criticism of the main framework in AI alignment

Michele Campolo · 31 Jan 2023 23:01 UTC
19 points
2 comments · 6 min read · LW link

Some alternative AI safety research projects

Michele Campolo · 28 Jun 2022 14:09 UTC
9 points
0 comments · 3 min read · LW link

From language to ethics by automated reasoning

Michele Campolo · 21 Nov 2021 15:16 UTC
4 points
4 comments · 6 min read · LW link

[Question] What is the strongest argument you know for antirealism?

Michele Campolo · 12 May 2021 10:53 UTC
7 points
58 comments · 1 min read · LW link

Naturalism and AI alignment

Michele Campolo · 24 Apr 2021 16:16 UTC
5 points
12 comments · 8 min read · LW link

Literature Review on Goal-Directedness

18 Jan 2021 11:15 UTC
80 points
21 comments · 31 min read · LW link

Decision Theory is multifaceted

Michele Campolo · 13 Sep 2020 22:30 UTC
9 points
12 comments · 8 min read · LW link

Goals and short descriptions

Michele Campolo · 2 Jul 2020 17:41 UTC
14 points
8 comments · 5 min read · LW link

Wireheading and discontinuity

Michele Campolo · 18 Feb 2020 10:49 UTC
21 points
4 comments · 3 min read · LW link

Thinking of tool AIs

Michele Campolo · 20 Nov 2019 21:47 UTC
6 points
2 comments · 4 min read · LW link