Michele Campolo

Karma: 107

Lifelong recursive self-improver, on his way to exploding really intelligently :D

More seriously: my posts are mostly about AI alignment, with an eye towards moral progress. I have a bachelor’s degree in mathematics, I did research at CEEALAR for four years, and now I do research independently.

A fun problem to think about:
Imagine it’s the year 1500. You want to make an AI that is able to tell you that witch hunts are a terrible idea and to convincingly explain why, despite the fact that many people around you seem to think the exact opposite. Assuming you have the technology, how do you do it?

I’m trying to solve that problem, with the difference that we are now in the 21st century (I know, massive spoiler, sorry about that).

The problem above, and the fact that I’d like to avoid producing AI that can be used for bad purposes, are what motivate my research. If this sounds interesting to you, have a look at these two short posts. If you are looking for something more technical, consider setting some time aside to read these two.

Feel free to reach out if you relate!

You can support my research through Patreon here.

Work in progress:

One more reason for AI capable of independent moral reasoning: alignment itself and cause prioritisation

Michele Campolo · 22 Aug 2025 15:53 UTC · −3 points · 0 comments · 3 min read · LW link

Doing good… best?

Michele Campolo · 22 Aug 2025 15:48 UTC · −1 points · 6 comments · 2 min read · LW link

With enough knowledge, any conscious agent acts morally

Michele Campolo · 22 Aug 2025 15:44 UTC · −2 points · 9 comments · 36 min read · LW link

Agents that act for reasons: a thought experiment

Michele Campolo · 24 Jan 2024 16:47 UTC · 3 points · 0 comments · 3 min read · LW link