Neuromorphic AI

Last edit: 29 Aug 2021 16:09 UTC by plex

A Neuromorphic AI (‘neuron-shaped’) is a form of AI whose functionality is largely copied from the human brain. This implies that its creators need not understand its inner workings beyond what is necessary to simulate them on a computer. It is considered less safe than either Whole Brain Emulation or de novo AI: it lacks the former’s high-fidelity replication of human values, and it lacks the cleaner design of the latter, which may permit good theoretical guarantees.

External Links

Book review: “A Thousand Brains” by Jeff Hawkins

Steven Byrnes · 4 Mar 2021 5:10 UTC · 108 points · 18 comments · 19 min read · LW link

Brain-Computer Interfaces and AI Alignment

niplav · 28 Aug 2021 19:48 UTC · 27 points · 6 comments · 11 min read · LW link

Jeff Hawkins on neuromorphic AGI within 20 years

Steven Byrnes · 15 Jul 2019 19:16 UTC · 165 points · 24 comments · 12 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · 2 Oct 2019 12:06 UTC · 57 points · 23 comments · 11 min read · LW link

FAI and the Information Theory of Pleasure

johnsonmx · 8 Sep 2015 21:16 UTC · 14 points · 19 comments · 4 min read · LW link

What’s Your Cognitive Algorithm?

Raemon · 18 Jun 2020 22:16 UTC · 69 points · 23 comments · 13 min read · LW link

Brain-inspired AGI and the “lifetime anchor”

Steven Byrnes · 29 Sep 2021 13:09 UTC · 64 points · 15 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 1. What’s the problem & Why work on it now?

Steven Byrnes · 26 Jan 2022 15:23 UTC · 101 points · 16 comments · 23 min read · LW link

[Intro to brain-like-AGI safety] 4. The “short-term predictor”

Steven Byrnes · 16 Feb 2022 13:12 UTC · 46 points · 11 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 5. The “long-term predictor”, and TD learning

Steven Byrnes · 23 Feb 2022 14:44 UTC · 30 points · 21 comments · 22 min read · LW link

[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL

Steven Byrnes · 2 Mar 2022 15:26 UTC · 25 points · 13 comments · 16 min read · LW link

[Intro to brain-like-AGI safety] 7. From hardcoded drives to foresighted plans: A worked example

Steven Byrnes · 9 Mar 2022 14:28 UTC · 32 points · 0 comments · 9 min read · LW link

[Intro to brain-like-AGI safety] 8. Takeaways from neuro 1/2: On AGI development

Steven Byrnes · 16 Mar 2022 13:59 UTC · 29 points · 2 comments · 15 min read · LW link

The Dark Side of Cognition Hypothesis

Cameron Berg · 3 Oct 2021 20:10 UTC · 19 points · 1 comment · 16 min read · LW link