The Brain-like AGI safety research agenda has proposed multiple research areas, and multiple people are working on some of them:
15.2.1.2 The “Reverse-engineer human social instincts” research program
The aintelope project (see the project announcement here) operationalizes this by implementing agents according to Steven’s framework. We have applied for LTFF funding.
There is also at least one more researcher actively working on it.
15.2.2.2 The “Easy-to-use super-secure sandbox for AGIs” research program
Encultured AI is working on this.
Note: There is considerable overlap in approach between Shard Theory and Brain-like AGI safety, which is not mentioned in the post.
Good point, I’ve updated the post to reflect this.
I’m excited for your project :)