Project ideas: Epistemics

Lukas Finnveden · 5 Jan 2024 23:41 UTC
41 points
4 comments · 1 min read · LW link
(lukasfinnveden.substack.com)

Benchmark Study #1: MMLU (Pile, MCQ)

Bruce W. Lee · 5 Jan 2024 21:35 UTC
10 points
0 comments · 5 min read · LW link
(arxiv.org)

Almost everyone I’ve met would be well-served thinking more about what to focus on

Henrik Karlsson · 5 Jan 2024 21:01 UTC
95 points
8 comments · 11 min read · LW link
(www.henrikkarlsson.xyz)

The Next ChatGPT Moment: AI Avatars

5 Jan 2024 20:14 UTC
37 points
10 comments · 1 min read · LW link

AI Impacts 2023 Expert Survey on Progress in AI

habryka · 5 Jan 2024 19:42 UTC
28 points
1 comment · 7 min read · LW link
(wiki.aiimpacts.org)

Technology path dependence and evaluating expertise

5 Jan 2024 19:21 UTC
24 points
2 comments · 15 min read · LW link

The Hippie Rabbit Hole - Nuggets of Gold in Rivers of Bullshit

Jonathan Moregård · 5 Jan 2024 18:27 UTC
37 points
20 comments · 8 min read · LW link
(honestliving.substack.com)

[Question] What technical topics could help with boundaries/membranes?

Chipmonk · 5 Jan 2024 18:14 UTC
14 points
25 comments · 1 min read · LW link

Catching AIs red-handed

5 Jan 2024 17:43 UTC
82 points
20 comments · 17 min read · LW link

AI Impacts Survey: December 2023 Edition

Zvi · 5 Jan 2024 14:40 UTC
34 points
6 comments · 10 min read · LW link
(thezvi.wordpress.com)

Forecast your 2024 with Fatebook

Sage Future · 5 Jan 2024 14:07 UTC
19 points
0 comments · 1 min read · LW link
(fatebook.io)

Predictive model agents are sort of corrigible

Raymond D · 5 Jan 2024 14:05 UTC
35 points
6 comments · 3 min read · LW link

Striking Implications for Learning Theory, Interpretability — and Safety?

RogerDearnaley · 5 Jan 2024 8:46 UTC
35 points
4 comments · 2 min read · LW link

If I ran the zoo

Optimization Process · 5 Jan 2024 5:14 UTC
18 points
0 comments · 2 min read · LW link

Does AI care about reality or just its own perception?

RedFishBlueFish · 5 Jan 2024 4:05 UTC
−5 points
8 comments · 1 min read · LW link

MIRI 2024 Mission and Strategy Update

Malo · 5 Jan 2024 0:20 UTC
216 points
44 comments · 8 min read · LW link

Project ideas: Governance during explosive technological growth

Lukas Finnveden · 4 Jan 2024 23:51 UTC
13 points
0 comments · 1 min read · LW link
(lukasfinnveden.substack.com)

Hello

S Benfield · 4 Jan 2024 23:35 UTC
6 points
0 comments · 2 min read · LW link

Using Threats to Achieve Socially Optimal Outcomes

StrivingForLegibility · 4 Jan 2024 23:30 UTC
8 points
0 comments · 3 min read · LW link

Best-Responding Is Not Always the Best Response

StrivingForLegibility · 4 Jan 2024 23:30 UTC
10 points
0 comments · 3 min read · LW link

Safety Data Sheets for Optimization Processes

StrivingForLegibility · 4 Jan 2024 23:30 UTC
15 points
1 comment · 4 min read · LW link

The Gears of Argmax

StrivingForLegibility · 4 Jan 2024 23:30 UTC
11 points
0 comments · 3 min read · LW link

Cellular reprogramming, pneumatic launch systems, and terraforming Mars: Some things I learned about at Foresight Vision Weekend

jasoncrawford · 4 Jan 2024 19:33 UTC
28 points
0 comments · 8 min read · LW link
(rootsofprogress.org)

Deep atheism and AI risk

Joe Carlsmith · 4 Jan 2024 18:58 UTC
131 points
22 comments · 27 min read · LW link

Some Vacation Photos

johnswentworth · 4 Jan 2024 17:15 UTC
78 points
0 comments · 1 min read · LW link

AISN #29: Progress on the EU AI Act Plus, the NY Times sues OpenAI for Copyright Infringement, and Congressional Questions about Research Standards in AI Safety

4 Jan 2024 16:09 UTC
8 points
0 comments · 6 min read · LW link
(newsletter.safe.ai)

EAG Bay Area Satellite event: AI Institution Design Hackathon 2024

beatrice@foresight.org · 4 Jan 2024 15:02 UTC
1 point
0 comments · 1 min read · LW link

AI #45: To Be Determined

Zvi · 4 Jan 2024 15:00 UTC
52 points
4 comments · 31 min read · LW link
(thezvi.wordpress.com)

Screen-supported Portable Monitor

jefftk · 4 Jan 2024 13:50 UTC
16 points
10 comments · 1 min read · LW link
(www.jefftk.com)

[Question] Which investments for aligned-AI outcomes?

tailcalled · 4 Jan 2024 13:28 UTC
8 points
9 comments · 2 min read · LW link

Non-alignment project ideas for making transformative AI go well

Lukas Finnveden · 4 Jan 2024 7:23 UTC
35 points
1 comment · 1 min read · LW link
(lukasfinnveden.substack.com)

Fact Checking and Retaliation Against Sources

jefftk · 4 Jan 2024 0:41 UTC
7 points
2 comments · 4 min read · LW link
(www.jefftk.com)

Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios

Hiroshi Yamakawa · 3 Jan 2024 23:46 UTC
1 point
0 comments · 17 min read · LW link

“Attitudes Toward Artificial General Intelligence: Results from American Adults 2021 and 2023” — call for reviewers (Seeds of Science)

rogersbacon · 3 Jan 2024 20:11 UTC
4 points
0 comments · 1 min read · LW link

What’s up with LLMs representing XORs of arbitrary features?

Sam Marks · 3 Jan 2024 19:44 UTC
154 points
61 comments · 16 min read · LW link

Spirit Airlines Merger Play

sapphire · 3 Jan 2024 19:25 UTC
5 points
12 comments · 1 min read · LW link

$300 for the best sci-fi prompt: the results

RomanS · 3 Jan 2024 19:10 UTC
16 points
19 comments · 7 min read · LW link

Agent membranes/boundaries and formalizing “safety”

Chipmonk · 3 Jan 2024 17:55 UTC
23 points
46 comments · 3 min read · LW link

Safety First: safety before full alignment. The deontic sufficiency hypothesis.

Chipmonk · 3 Jan 2024 17:55 UTC
47 points
3 comments · 3 min read · LW link

Practically A Book Review: Appendix to “Nonlinear’s Evidence: Debunking False and Misleading Claims” (ThingOfThings)

tailcalled · 3 Jan 2024 17:07 UTC
111 points
25 comments · 2 min read · LW link
(thingofthings.substack.com)

Trivial Mathematics as a Path Forward

ACrackedPot · 3 Jan 2024 16:41 UTC
−4 points
2 comments · 2 min read · LW link

Copyright Confrontation #1

Zvi · 3 Jan 2024 15:50 UTC
34 points
7 comments · 18 min read · LW link
(thezvi.wordpress.com)

[Question] Theoretically, could we balance the budget painlessly?

Logan Zoellner · 3 Jan 2024 14:46 UTC
4 points
12 comments · 1 min read · LW link

Johannes’ Biography

Johannes C. Mayer · 3 Jan 2024 13:27 UTC
19 points
0 comments · 10 min read · LW link

What Helped Me — Kale, Blood, CPAP, X-tiamine, Methylphenidate

Johannes C. Mayer · 3 Jan 2024 13:22 UTC
35 points
12 comments · 2 min read · LW link

[Question] Does LessWrong make a difference when it comes to AI alignment?

PhilosophicalSoul · 3 Jan 2024 12:21 UTC
21 points
11 comments · 1 min read · LW link

[Question] Terminology: <something>-ware for ML?

Oliver Sourbut · 3 Jan 2024 11:42 UTC
17 points
27 comments · 1 min read · LW link

Trading off Lives

jefftk · 3 Jan 2024 3:40 UTC
53 points
12 comments · 2 min read · LW link
(www.jefftk.com)

MonoPoly Restricted Trust

ymeskhout · 2 Jan 2024 23:02 UTC
42 points
37 comments · 9 min read · LW link

Agent membranes and causal distance

Chipmonk · 2 Jan 2024 22:43 UTC
19 points
3 comments · 3 min read · LW link