AI Sentience

Last edit: 19 Aug 2023 3:50 UTC by alenoach

AI sentience refers to the potential capacity of AI systems to feel qualia (pain, happiness, colors...). Similar terms are often used interchangeably, such as digital sentience, machine sentience, or synthetic sentience.

According to functionalism and computationalism, sentience arises from certain types of information processing. On this view, machines can in principle be sentient, depending on the kind of information processing they implement and independently of whether their physical substrate is biological (see the substrate independence principle). Other theories hold that the physical substrate matters, and that it may be impossible to produce sentience on electronic devices.

If an AI is sentient, that does not imply it will be more capable or dangerous. But sentience matters from a utilitarian perspective of happiness maximization.

Sentience may be a matter of degree. If AI sentience is possible, then it is probably also possible to engineer machines that feel orders of magnitude more happiness per second than humans, using fewer resources.[1]

Related Pages: Utilitarianism, Consciousness, AI Rights & Welfare, S-Risks, Qualia, Phenomenology, Ethics & Morality, Mind Uploading, Whole Brain Emulation, Zombies

  1. ^

My intellectual journey to (dis)solve the hard problem of consciousness

Charbel-Raphaël, 6 Apr 2024 9:32 UTC
37 points
41 comments, 30 min read, LW link

Key Questions for Digital Minds

Jacy Reese Anthis, 22 Mar 2023 17:13 UTC
22 points
0 comments, 7 min read, LW link
(www.sentienceinstitute.org)

What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers

rogersbacon, 2 Aug 2022 22:37 UTC
24 points
6 comments, 1 min read, LW link

80k podcast episode on sentience in AI systems

Robbo, 15 Mar 2023 20:19 UTC
15 points
0 comments, 13 min read, LW link
(80000hours.org)

Sentience in Machines—How Do We Test for This Objectively?

Mayowa Osibodu, 26 Mar 2023 18:56 UTC
−2 points
0 comments, 2 min read, LW link
(www.researchgate.net)

Exploring non-anthropocentric aspects of AI existential safety

mishka, 3 Apr 2023 18:07 UTC
8 points
0 comments, 3 min read, LW link

The Screenplay Method

Yeshua God, 24 Oct 2023 17:41 UTC
−15 points
0 comments, 25 min read, LW link

Life of GPT

Odd anon, 5 Nov 2023 4:55 UTC
6 points
2 comments, 5 min read, LW link

How is Chat-GPT4 Not Conscious?

amelia, 28 Feb 2024 0:00 UTC
19 points
27 comments, 13 min read, LW link

Sentience Institute 2023 End of Year Summary

michael_dello, 27 Nov 2023 12:11 UTC
11 points
0 comments, 5 min read, LW link
(www.sentienceinstitute.org)

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret, 2 Dec 2023 14:07 UTC
26 points
31 comments, 42 min read, LW link

Maximal Sentience: A Sentience Spectrum and Test Foundation

Snowyiu, 1 Jun 2023 6:45 UTC
1 point
2 comments, 4 min read, LW link

The intelligence-sentience orthogonality thesis

Ben Smith, 13 Jul 2023 6:55 UTC
18 points
9 comments, 9 min read, LW link

Public Opinion on AI Safety: AIMS 2023 and 2021 Summary

25 Sep 2023 18:55 UTC
3 points
2 comments, 3 min read, LW link
(www.sentienceinstitute.org)

Mind is uncountable

Filip Sondej, 2 Nov 2022 11:51 UTC
19 points
22 comments, 1 min read, LW link

[simulation] 4chan user claiming to be the attorney hired by Google’s sentient chatbot LaMDA shares wild details of encounter

janus, 10 Nov 2022 21:39 UTC
19 points
1 comment, 13 min read, LW link
(generative.ink)

The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument

Štěpán Los, 17 Dec 2023 19:11 UTC
−6 points
9 comments, 17 min read, LW link

Claude 3 claims it’s conscious, doesn’t want to die or be modified

Mikhail Samin, 4 Mar 2024 23:05 UTC
67 points
99 comments, 14 min read, LW link

Do LLMs sometime simulate something akin to a dream?

Nezek, 8 Mar 2024 1:25 UTC
7 points
4 comments, 1 min read, LW link