How to (hopefully ethically) make money off of AGI

6 Nov 2023 23:35 UTC
127 points
75 comments · 32 min read

cost estimation for 2 grid energy storage systems

bhauth · 6 Nov 2023 23:32 UTC
16 points
12 comments · 7 min read
(www.bhauth.com)

A bet on critical periods in neural networks

6 Nov 2023 23:21 UTC
24 points
1 comment · 6 min read

Job listing: Communications Generalist / Project Manager

Gretta Duleba · 6 Nov 2023 20:21 UTC
49 points
7 comments · 1 min read

Askesis: a model of the cerebellum

MadHatter · 6 Nov 2023 20:19 UTC
7 points
2 comments · 1 min read
(github.com)

LQPR: An Algorithm for Reinforcement Learning with Provable Safety Guarantees

MadHatter · 6 Nov 2023 20:17 UTC
6 points
0 comments · 1 min read
(github.com)

ACX Meetup Leipzig

Roman Leipe · 6 Nov 2023 18:33 UTC
1 point
0 comments · 1 min read

[Question] Does bulimia work?

lc · 6 Nov 2023 17:58 UTC
6 points
18 comments · 1 min read

Why building ventures in AI Safety is particularly challenging

Heramb · 6 Nov 2023 16:27 UTC
1 point
0 comments · 1 min read
(forum.effectivealtruism.org)

What is true is already so. Owning up to it doesn’t make it worse.

RamblinDash · 6 Nov 2023 15:49 UTC
20 points
2 comments · 1 min read

An illustrative model of backfire risks from pausing AI research

Maxime Riché · 6 Nov 2023 14:30 UTC
33 points
3 comments · 11 min read

Proposal for improving state of alignment research

Iknownothing · 6 Nov 2023 13:55 UTC
2 points
0 comments · 1 min read

Are language models good at making predictions?

dynomight · 6 Nov 2023 13:10 UTC
76 points
14 comments · 4 min read
(dynomight.net)

Tips, tricks, lessons and thoughts on hosting hackathons

gergogaspar · 6 Nov 2023 11:03 UTC
3 points
0 comments · 11 min read

Announcing TAIS 2024

Blaine · 6 Nov 2023 8:38 UTC
23 points
0 comments · 1 min read
(tais2024.cc)

Taboo Wall

Screwtape · 6 Nov 2023 3:51 UTC
18 points
0 comments · 3 min read

Ramble on progressively constrained agent design

Iris of Rosebloom · 5 Nov 2023 23:34 UTC
3 points
0 comments · 8 min read

When and why should you use the Kelly criterion?

5 Nov 2023 23:26 UTC
26 points
25 comments · 16 min read

On Overhangs and Technological Change

Roko · 5 Nov 2023 22:58 UTC
50 points
19 comments · 2 min read

xAI announces Grok, beats GPT-3.5

nikola · 5 Nov 2023 22:11 UTC
10 points
6 comments · 1 min read
(x.ai)

Disentangling four motivations for acting in accordance with UDT

Julian Stastny · 5 Nov 2023 21:26 UTC
33 points
3 comments · 7 min read

AI as Super-Demagogue

RationalDino · 5 Nov 2023 21:21 UTC
−2 points
9 comments · 9 min read

EA orgs’ legal structure inhibits risk taking and information sharing on the margin

Elizabeth · 5 Nov 2023 19:13 UTC
135 points
17 comments · 4 min read

Eric Schmidt on recursive self-improvement

nikola · 5 Nov 2023 19:05 UTC
24 points
3 comments · 1 min read
(www.youtube.com)

Pivotal Acts might Not be what You Think they are

Johannes C. Mayer · 5 Nov 2023 17:23 UTC
41 points
13 comments · 3 min read

The Assumed Intent Bias

silentbob · 5 Nov 2023 16:28 UTC
51 points
13 comments · 6 min read

Go flash blinking lights at printed text right now

lukehmiles · 5 Nov 2023 7:29 UTC
15 points
9 comments · 1 min read

Life of GPT

Odd anon · 5 Nov 2023 4:55 UTC
6 points
2 comments · 5 min read

Lightning Talks

Screwtape · 5 Nov 2023 3:27 UTC
6 points
3 comments · 4 min read

Utility is not the selection target

tailcalled · 4 Nov 2023 22:48 UTC
24 points
1 comment · 1 min read

Stuxnet, not Skynet: Humanity’s disempowerment by AI

Roko · 4 Nov 2023 22:23 UTC
106 points
23 comments · 6 min read

The 6D effect: When companies take risks, one email can be very powerful.

scasper · 4 Nov 2023 20:08 UTC
261 points
40 comments · 3 min read

Genetic fitness is a measure of selection strength, not the selection target

Kaj_Sotala · 4 Nov 2023 19:02 UTC
55 points
43 comments · 18 min read

The Soul Key

Richard_Ngo · 4 Nov 2023 17:51 UTC
91 points
9 comments · 8 min read
(www.narrativeark.xyz)

[Linkpost] Concept Alignment as a Prerequisite for Value Alignment

Bogdan Ionut Cirstea · 4 Nov 2023 17:34 UTC
27 points
0 comments · 1 min read
(arxiv.org)

We are already in a persuasion-transformed world and must take precautions

trevor · 4 Nov 2023 15:53 UTC
36 points
14 comments · 6 min read

Being good at the basics

dominicq · 4 Nov 2023 14:18 UTC
32 points
1 comment · 3 min read

If a little is good, is more better?

DanielFilan · 4 Nov 2023 7:10 UTC
25 points
15 comments · 2 min read
(danielfilan.com)

Untrusted smart models and trusted dumb models

Buck · 4 Nov 2023 3:06 UTC
80 points
12 comments · 6 min read

As Many Ideas

Screwtape · 3 Nov 2023 22:47 UTC
10 points
0 comments · 4 min read

Paul Christiano on Dwarkesh Podcast

ESRogs · 3 Nov 2023 22:13 UTC
17 points
0 comments · 1 min read
(www.dwarkeshpatel.com)

Deception Chess: Game #1

3 Nov 2023 21:13 UTC
104 points
19 comments · 8 min read

8 examples informing my pessimism on uploading without reverse engineering

Steven Byrnes · 3 Nov 2023 20:03 UTC
111 points
12 comments · 12 min read

Integrity in AI Governance and Advocacy

3 Nov 2023 19:52 UTC
134 points
57 comments · 23 min read

Averaging samples from a population with log-normal distribution

CrimsonChin · 3 Nov 2023 19:42 UTC
8 points
2 comments · 1 min read

Securing Civilization Against Catastrophic Pandemics

jefftk · 3 Nov 2023 19:33 UTC
13 points
0 comments · 1 min read
(dam.gcsp.ch)

No Escape from Free Will: The Paradox of Determinism and Embedded Agency

gmax · 3 Nov 2023 17:55 UTC
−9 points
0 comments · 3 min read

Thoughts on open source AI

Sam Marks · 3 Nov 2023 15:35 UTC
54 points
17 comments · 10 min read

Knowledge Base 6: Consensus theory of truth

iwis · 3 Nov 2023 13:56 UTC
−8 points
0 comments · 1 min read

[Question] Shouldn’t we ‘Just’ Superimitate Low-Res Uploads?

lukemarks · 3 Nov 2023 7:42 UTC
15 points
2 comments · 2 min read