MiguelDev

Karma: 292

help avoid catastrophic AI failures…

Ethically aligned prototype: RLLMv3

Unethically aligned prototype: Paperclip-Todd

Research proposal: Leveraging Jungian archetypes to create values-based models

MiguelDev · 5 Mar 2023 17:39 UTC
5 points
2 comments · 2 min read · LW link

[Question] Why Carl Jung is not popular in AI Alignment Research?

MiguelDev · 17 Mar 2023 23:56 UTC
−3 points
13 comments · 1 min read · LW link

Humanity’s Lack of Unity Will Lead to AGI Catastrophe

MiguelDev · 19 Mar 2023 19:18 UTC
3 points
2 comments · 4 min read · LW link