
Antb

Karma: 46

I strongly believe the alignment problem is fundamentally unsolvable, another instance of an undecidable problem. I would, however, prefer to die with dignity. I study methods of minimizing the chances of humanity being wiped out after the advent of ASI.

My current line of research is computational neuroscience for human cognitive augmentation. I work from the heavily flawed theory that the higher humanity's intelligence waterline, the better the chances that an ASI employs us as part of its goals instead of ‘recycling’ us as biomass.

What does your philosophy maximize?

Antb · 1 Mar 2024 16:10 UTC
0 points
1 comment · 1 min read · LW link

Looking for Spanish AI Alignment Researchers

Antb · 7 Jan 2023 18:52 UTC
7 points
3 comments · 1 min read · LW link

[Question] What career advice do you give to software engineers?

Antb · 31 Dec 2022 12:01 UTC
15 points
4 comments · 1 min read · LW link

[Question] Creating superintelligence without AGI

Antb · 17 Oct 2022 19:01 UTC
7 points
3 comments · 1 min read · LW link