
Noosphere89

Karma: 1,820

[Question] How easy/fast is it for an AGI to hack computers/a human brain?

Noosphere89 · 21 Jun 2022 0:34 UTC
0 points
1 comment · 1 min read · LW link

How humanity would respond to slow takeoff, with takeaways from the entire COVID-19 pandemic

Noosphere89 · 6 Jul 2022 17:52 UTC
4 points
1 comment · 2 min read · LW link

Why AGI Timeline Research/Discourse Might Be Overrated

Noosphere89 · 20 Jul 2022 20:26 UTC
5 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Which singularity schools plus the no singularity school was right?

Noosphere89 · 23 Jul 2022 15:16 UTC
9 points
26 comments · 9 min read · LW link

Complexity No Bar to AI (Or, why Computational Complexity matters less than you think for real life problems)

Noosphere89 · 7 Aug 2022 19:55 UTC
17 points
14 comments · 3 min read · LW link
(www.gwern.net)

Can You Upload Your Mind & Live Forever? From Kurzgesagt – In a Nutshell

Noosphere89 · 19 Aug 2022 19:32 UTC
3 points
3 comments · 1 min read · LW link
(www.youtube.com)

[Question] In a lack of data, how should you weigh credences in theoretical physics’s Theories of Everything, or TOEs?

Noosphere89 · 7 Sep 2022 18:25 UTC
7 points
11 comments · 1 min read · LW link

[Question] Is the game design/art maxim more generalizable to criticism/praise itself?

Noosphere89 · 22 Sep 2022 13:19 UTC
4 points
1 comment · 1 min read · LW link

[Question] Does biology reliably find the global maximum, or at least get close?

Noosphere89 · 10 Oct 2022 20:55 UTC
24 points
71 comments · 1 min read · LW link

When should you defer to expertise? A useful heuristic (Crosspost from EA forum)

Noosphere89 · 13 Oct 2022 14:14 UTC
9 points
3 comments · 2 min read · LW link
(forum.effectivealtruism.org)

[Question] How easy is it to supervise processes vs outcomes?

Noosphere89 · 18 Oct 2022 17:48 UTC
3 points
0 comments · 1 min read · LW link

Logical Decision Theories: Our final failsafe?

Noosphere89 · 25 Oct 2022 12:51 UTC
−7 points
8 comments · 1 min read · LW link
(www.lesswrong.com)

[Question] Is the Orthogonality Thesis true for humans?

Noosphere89 · 27 Oct 2022 14:41 UTC
12 points
20 comments · 1 min read · LW link

A first success story for Outer Alignment: InstructGPT

Noosphere89 · 8 Nov 2022 22:52 UTC
6 points
1 comment · 1 min read · LW link
(openai.com)

I’ve updated towards AI boxing being surprisingly easy

Noosphere89 · 25 Dec 2022 15:40 UTC
8 points
20 comments · 2 min read · LW link

[Question] How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century?

Noosphere89 · 16 Feb 2023 15:25 UTC
56 points
66 comments · 1 min read · LW link

Some thoughts on the cults LW had

Noosphere89 · 26 Feb 2023 15:46 UTC
−5 points
28 comments · 1 min read · LW link

A case for capabilities work on AI as net positive

Noosphere89 · 27 Feb 2023 21:12 UTC
10 points
37 comments · 1 min read · LW link

[Question] Best arguments against the outside view that AGI won’t be a huge deal, thus we survive.

Noosphere89 · 27 Mar 2023 20:49 UTC
4 points
7 comments · 1 min read · LW link

[Question] Can we get around Gödel’s Incompleteness theorems and Turing undecidable problems via infinite computers?

Noosphere89 · 17 Apr 2023 15:14 UTC
−11 points
12 comments · 1 min read · LW link