Neuromorphic AI

Last edit: 29 Aug 2021 16:09 UTC by plex

A Neuromorphic AI (‘neuron-shaped’) is a form of AI whose functionality is largely copied from the human brain. This implies that its inner workings need not be understood by its creators beyond what is necessary to simulate them on a computer. It is considered less safe than either Whole Brain Emulation or de novo AI: it lacks the former’s high-fidelity replication of human values, and, because of its messier design, it lacks the theoretical guarantees that the latter’s cleaner design may permit.

External Links

Book review: “A Thousand Brains” by Jeff Hawkins

Steven Byrnes · 4 Mar 2021 5:10 UTC
111 points
18 comments · 19 min read · LW link

Brain-Computer Interfaces and AI Alignment

niplav · 28 Aug 2021 19:48 UTC
33 points
6 comments · 11 min read · LW link

Jeff Hawkins on neuromorphic AGI within 20 years

Steven Byrnes · 15 Jul 2019 19:16 UTC
167 points
24 comments · 12 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · 2 Oct 2019 12:06 UTC
58 points
23 comments · 11 min read · LW link

FAI and the Information Theory of Pleasure

johnsonmx · 8 Sep 2015 21:16 UTC
14 points
19 comments · 4 min read · LW link

What’s Your Cognitive Algorithm?

Raemon · 18 Jun 2020 22:16 UTC
71 points
23 comments · 13 min read · LW link

Brain-inspired AGI and the “lifetime anchor”

Steven Byrnes · 29 Sep 2021 13:09 UTC
64 points
16 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 1. What’s the problem & Why work on it now?

Steven Byrnes · 26 Jan 2022 15:23 UTC
129 points
19 comments · 23 min read · LW link

[Intro to brain-like-AGI safety] 4. The “short-term predictor”

Steven Byrnes · 16 Feb 2022 13:12 UTC
58 points
11 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 5. The “long-term predictor”, and TD learning

Steven Byrnes · 23 Feb 2022 14:44 UTC
48 points
25 comments · 21 min read · LW link

[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL

Steven Byrnes · 2 Mar 2022 15:26 UTC
55 points
13 comments · 15 min read · LW link

[Intro to brain-like-AGI safety] 7. From hardcoded drives to foresighted plans: A worked example

Steven Byrnes · 9 Mar 2022 14:28 UTC
67 points
0 comments · 9 min read · LW link

[Intro to brain-like-AGI safety] 8. Takeaways from neuro 1/2: On AGI development

Steven Byrnes · 16 Mar 2022 13:59 UTC
48 points
2 comments · 14 min read · LW link

My take on Jacob Cannell’s take on AGI safety

Steven Byrnes · 28 Nov 2022 14:01 UTC
63 points
14 comments · 30 min read · LW link

EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes

23 Mar 2023 14:32 UTC
27 points
0 comments · 27 min read · LW link

The Dark Side of Cognition Hypothesis

Cameron Berg · 3 Oct 2021 20:10 UTC
19 points
1 comment · 16 min read · LW link

AI researchers announce NeuroAI agenda

Cameron Berg · 24 Oct 2022 0:14 UTC
37 points
12 comments · 6 min read · LW link

Safety of Self-Assembled Neuromorphic Hardware

Can Rager · 26 Dec 2022 18:51 UTC
14 points
2 comments · 10 min read · LW link

Large Language Models Suggest a Path to Ems

anithite · 29 Dec 2022 2:20 UTC
17 points
2 comments · 5 min read · LW link

Are you stably aligned?

Seth Herd · 24 Feb 2023 22:08 UTC
11 points
0 comments · 2 min read · LW link

Human preferences as RL critic values—implications for alignment

Seth Herd · 14 Mar 2023 22:10 UTC
10 points
6 comments · 6 min read · LW link

The alignment stability problem

Seth Herd · 26 Mar 2023 2:10 UTC
18 points
5 comments · 4 min read · LW link