New o1-like model (QwQ) beats Claude 3.5 Sonnet with only 32B parameters

Jesse Hoogland · 27 Nov 2024 22:06 UTC
68 points
4 comments · 1 min read · LW link
(qwenlm.github.io)

“Map of AI Futures”—An interactive flowchart

swante · 27 Nov 2024 21:31 UTC
78 points
5 comments · 2 min read · LW link
(swantescholz.github.io)

How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthesizing pathogens

jeremtti · 27 Nov 2024 21:17 UTC
6 points
0 comments · 9 min read · LW link

ARENA 4.0 Impact Report

27 Nov 2024 20:51 UTC
45 points
3 comments · 13 min read · LW link

On AI Detectors Regarding College Applications

Kaustubh Kislay · 27 Nov 2024 20:25 UTC
4 points
2 comments · 2 min read · LW link

When the Scientific Method Doesn’t Really Help...

casualphysicsenjoyer · 27 Nov 2024 19:52 UTC
3 points
1 comment · 5 min read · LW link
(chillphysicsenjoyer.substack.com)

Causal inference for the home gardener

braces · 27 Nov 2024 17:55 UTC
42 points
1 comment · 5 min read · LW link

Repeal the Jones Act of 1920

Zvi · 27 Nov 2024 15:00 UTC
156 points
28 comments · 39 min read · LW link · 2 reviews
(thezvi.wordpress.com)

Long Live the Usurper

pleiotroth · 27 Nov 2024 12:10 UTC
21 points
0 comments · 5 min read · LW link

Hope to live or fear to die?

Knight Lee · 27 Nov 2024 10:42 UTC
3 points
0 comments · 1 min read · LW link

The Queen’s Dilemma: A Paradox of Control

Daniel Murfet · 27 Nov 2024 10:40 UTC
27 points
11 comments · 3 min read · LW link

AXRP Episode 38.2 - Jesse Hoogland on Singular Learning Theory

DanielFilan · 27 Nov 2024 6:30 UTC
34 points
0 comments · 10 min read · LW link

Hierarchical Agency: A Missing Piece in AI Alignment

Jan_Kulveit · 27 Nov 2024 5:49 UTC
121 points
23 comments · 11 min read · LW link · 1 review

Facets and Social Networks

jefftk · 27 Nov 2024 3:40 UTC
15 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Call for evaluators: Participate in the European AI Office workshop on general-purpose AI models and systemic risks

27 Nov 2024 2:54 UTC
30 points
0 comments · 2 min read · LW link

Wagering on Will And Worth (Pascal’s Wager for Free Will and Value)

Robert Cousineau · 27 Nov 2024 0:43 UTC
−1 points
2 comments · 3 min read · LW link

Should you have children? All LessWrong posts about the topic

Sherrinford · 26 Nov 2024 23:52 UTC
18 points
0 comments · 16 min read · LW link

Dave Kasten’s AGI-by-2027 vignette

davekasten · 26 Nov 2024 23:20 UTC
49 points
8 comments · 5 min read · LW link

Fractals to Quasiparticles

James Camacho · 26 Nov 2024 20:19 UTC
5 points
0 comments · 5 min read · LW link

[Question] What epsilon do you subtract from “certainty” in your own probability estimates?

Dagon · 26 Nov 2024 19:13 UTC
13 points
6 comments · 1 min read · LW link

Implications—How Conscious Significance Could Inform Our Lives

James Stephen Brown · 26 Nov 2024 17:42 UTC
7 points
0 comments · 13 min read · LW link

Workshop Report: Why current benchmark approaches are not sufficient for safety

26 Nov 2024 17:20 UTC
3 points
1 comment · 3 min read · LW link

You are not too “ir­ra­tional” to know your prefer­ences.

DaystarEld · 26 Nov 2024 15:01 UTC
241 points
63 comments · 13 min read · LW link

AI & Liability Ideathon

Kabir Kumar · 26 Nov 2024 13:54 UTC
20 points
2 comments · 4 min read · LW link
(lu.ma)

Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?

Bogdan Ionut Cirstea · 26 Nov 2024 9:58 UTC
10 points
0 comments · 1 min read · LW link
(arxiv.org)

Should you increase AI alignment funding, or increase AI regulation?

Knight Lee · 26 Nov 2024 9:17 UTC
7 points
1 comment · 4 min read · LW link

Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive)

Davidmanheim · 26 Nov 2024 8:00 UTC
16 points
4 comments · 6 min read · LW link

Filled Cupcakes

jefftk · 26 Nov 2024 3:20 UTC
21 points
2 comments · 1 min read · LW link
(www.jefftk.com)

notes on prioritizing tasks & cognition-threads

Emrik · 26 Nov 2024 0:28 UTC
3 points
1 comment · 4 min read · LW link

[Question] Why are there no interesting (1D, 2-state) quantum cellular automata?

Optimization Process · 26 Nov 2024 0:11 UTC
29 points
13 comments · 2 min read · LW link

Counting AGIs

26 Nov 2024 0:06 UTC
76 points
19 comments · 32 min read · LW link

The Problem with Reasoners by Aidan McLaughlin

t14n · 25 Nov 2024 20:24 UTC
12 points
1 comment · 1 min read · LW link
(aidanmclaughlin.notion.site)

Locally optimal strategies

Chris Lakin · 25 Nov 2024 18:35 UTC
41 points
7 comments · 1 min read · LW link
(chrislakin.blog)

a space habitat design

bhauth · 25 Nov 2024 17:28 UTC
55 points
13 comments · 9 min read · LW link
(bhauth.com)

Arthropod (non) sentience

Arturo Macias · 25 Nov 2024 16:01 UTC
9 points
8 comments · 4 min read · LW link

Crosspost: Developing the middle ground on polarized topics

juliawise · 25 Nov 2024 14:39 UTC
13 points
16 comments · 3 min read · LW link

Two flavors of computational functionalism

EuanMcLean · 25 Nov 2024 10:47 UTC
27 points
9 comments · 4 min read · LW link

Alignment is not intelligent

Donatas Lučiūnas · 25 Nov 2024 6:59 UTC
−23 points
18 comments · 5 min read · LW link

Zaragoza ACX/LW Meetup

Fernand0 · 25 Nov 2024 6:56 UTC
1 point
0 comments · 1 min read · LW link

A better “Statement on AI Risk?”

Knight Lee · 25 Nov 2024 4:50 UTC
9 points
6 comments · 3 min read · LW link

AI Specialized in ML Training Could Create ASI: AGI Is Unnecessary

satopi · 25 Nov 2024 2:31 UTC
−7 points
1 comment · 1 min read · LW link

I, Token

Ivan Vendrov · 25 Nov 2024 2:20 UTC
14 points
2 comments · 3 min read · LW link
(nothinghuman.substack.com)

Passages I Highlighted in The Letters of J.R.R. Tolkien

Ivan Vendrov · 25 Nov 2024 1:47 UTC
144 points
39 comments · 31 min read · LW link

Decorated pedestrian tunnels

dkl9 · 24 Nov 2024 22:16 UTC
0 points
3 comments · 1 min read · LW link
(dkl9.net)

Gothenburg LW/ACX meetup

Stefan · 24 Nov 2024 19:40 UTC
2 points
0 comments · 1 min read · LW link

[Question] Are You More Real If You’re Really Forgetful?

Thane Ruthenis · 24 Nov 2024 19:30 UTC
40 points
30 comments · 5 min read · LW link

Perils of Generalizing from One’s Social Group

localdeity · 24 Nov 2024 15:31 UTC
64 points
1 comment · 3 min read · LW link

Disentangling Representations through Multi-task Learning

Bogdan Ionut Cirstea · 24 Nov 2024 13:10 UTC
14 points
1 comment · 1 min read · LW link
(arxiv.org)

The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable

Matrice Jacobine · 24 Nov 2024 9:36 UTC
0 points
0 comments · 2 min read · LW link
(www.eff.org)

Mechanistic Interpretability of Llama 3.2 with Sparse Autoencoders

PaulPauls · 24 Nov 2024 5:45 UTC
19 points
3 comments · 1 min read · LW link
(github.com)