
Neuromorphic AI


A Neuromorphic AI (‘neuron-shaped’) is a form of AI in which most of the functionality is copied from the human brain. This implies that its inner workings need not be understood by its creators any further than is necessary to simulate them on a computer. It is considered less safe than either Whole Brain Emulation or de novo AI: it lacks the former’s high-quality replication of human values, and it lacks the good theoretical guarantees that the latter’s cleaner design may allow.
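To make “copying functionality from the brain and simulating it on a computer” concrete, here is a minimal illustrative sketch (not drawn from any of the posts below) of a leaky integrate-and-fire neuron, one of the simplest brain-derived building blocks used in neuromorphic systems. All parameter values are placeholders chosen for illustration, not biologically fitted constants.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e8):
    """Return (membrane voltage per step, spike times in seconds).

    Illustrative leaky integrate-and-fire model; parameters are placeholders.
    """
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay back toward rest, pushed up by input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:        # threshold crossed -> emit a spike
            spikes.append(step * dt)
            v = v_reset             # reset membrane potential after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Example: a constant 0.2 nA input for 100 ms produces a regular spike train.
volts, spike_times = simulate_lif(np.full(100, 2e-10))
print(f"{len(spike_times)} spikes in 100 ms")
```

Neuromorphic systems wire many such units into large spiking networks, often on specialized hardware; the learned connectivity of those networks can be as opaque as the brain circuits they imitate, which is the safety concern the paragraph above points at.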


Safety of Self-Assembled Neuromorphic Hardware

Can Rager, 26 Dec 2022 18:51 UTC
15 points
2 comments, 10 min read, LW link
(forum.effectivealtruism.org)

Book review: “A Thousand Brains” by Jeff Hawkins

Steven Byrnes, 4 Mar 2021 5:10 UTC
116 points
18 comments, 19 min read, LW link

Brain-Computer Interfaces and AI Alignment

niplav, 28 Aug 2021 19:48 UTC
35 points
6 comments, 11 min read, LW link

EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes

23 Mar 2023 14:32 UTC
28 points
0 comments, 27 min read, LW link
(youtu.be)

[Intro to brain-like-AGI safety] 12. Two paths forward: “Controlled AGI” and “Social-instinct AGI”

Steven Byrnes, 20 Apr 2022 12:58 UTC
44 points
10 comments, 16 min read, LW link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes, 30 Apr 2023 14:38 UTC
92 points
6 comments, 9 min read, LW link

Jeff Hawkins on neuromorphic AGI within 20 years

Steven Byrnes, 15 Jul 2019 19:16 UTC
170 points
24 comments, 12 min read, LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes, 2 Oct 2019 12:06 UTC
60 points
23 comments, 11 min read, LW link

FAI and the Information Theory of Pleasure

johnsonmx, 8 Sep 2015 21:16 UTC
14 points
21 comments, 4 min read, LW link

What’s Your Cognitive Algorithm?

Raemon, 18 Jun 2020 22:16 UTC
73 points
23 comments, 13 min read, LW link

Brain-inspired AGI and the “lifetime anchor”

Steven Byrnes, 29 Sep 2021 13:09 UTC
65 points
16 comments, 13 min read, LW link

[Intro to brain-like-AGI safety] 1. What’s the problem & Why work on it now?

Steven Byrnes, 26 Jan 2022 15:23 UTC
150 points
19 comments, 24 min read, LW link

[Intro to brain-like-AGI safety] 4. The “short-term predictor”

Steven Byrnes, 16 Feb 2022 13:12 UTC
64 points
11 comments, 13 min read, LW link

[Intro to brain-like-AGI safety] 5. The “long-term predictor”, and TD learning

Steven Byrnes, 23 Feb 2022 14:44 UTC
52 points
25 comments, 21 min read, LW link

[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL

Steven Byrnes, 2 Mar 2022 15:26 UTC
68 points
16 comments, 15 min read, LW link

[Intro to brain-like-AGI safety] 7. From hardcoded drives to foresighted plans: A worked example

Steven Byrnes, 9 Mar 2022 14:28 UTC
78 points
0 comments, 9 min read, LW link

[Intro to brain-like-AGI safety] 8. Takeaways from neuro 1/2: On AGI development

Steven Byrnes, 16 Mar 2022 13:59 UTC
57 points
2 comments, 14 min read, LW link

My take on Jacob Cannell’s take on AGI safety

Steven Byrnes, 28 Nov 2022 14:01 UTC
71 points
15 comments, 30 min read, LW link, 1 review

Capabilities and alignment of LLM cognitive architectures

Seth Herd, 18 Apr 2023 16:29 UTC
80 points
18 comments, 20 min read, LW link

The alignment stability problem

Seth Herd, 26 Mar 2023 2:10 UTC
24 points
10 comments, 4 min read, LW link

The Dark Side of Cognition Hypothesis

Cameron Berg, 3 Oct 2021 20:10 UTC
19 points
1 comment, 16 min read, LW link

GPT-4 implicitly values identity preservation: a study of LMCA identity management

Ozyrus, 17 May 2023 14:13 UTC
21 points
4 comments, 13 min read, LW link

Large Language Models Suggest a Path to Ems

anithite, 29 Dec 2022 2:20 UTC
17 points
2 comments, 5 min read, LW link

Are you stably aligned?

Seth Herd, 24 Feb 2023 22:08 UTC
12 points
0 comments, 2 min read, LW link

AI researchers announce NeuroAI agenda

Cameron Berg, 24 Oct 2022 0:14 UTC
37 points
12 comments, 6 min read, LW link
(arxiv.org)

Human preferences as RL critic values—implications for alignment

Seth Herd, 14 Mar 2023 22:10 UTC
21 points
6 comments, 6 min read, LW link

Correcting a misconception: consciousness does not need 90 billion neurons, at all

bvbvbvbvbvbvbvbvbvbvbv, 31 Mar 2023 16:02 UTC
21 points
19 comments, 1 min read, LW link