
Gradient Descent

Last edit: 8 Sep 2021 18:16 UTC by Ruby

This tag page is a stub.
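Since the tag description is a stub, a minimal sketch of the technique it covers may help: gradient descent iteratively updates parameters in the direction opposite the gradient of a loss. The quadratic loss, learning rate, and step count below are arbitrary illustrative choices, not drawn from any of the listed posts.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # update rule: x <- x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimizer is x = 3; the iterate converges there geometrically.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```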

Hypothesis: gradient descent prefers general circuits

Quintin Pope · 8 Feb 2022 21:12 UTC
46 points · 26 comments · 11 min read · LW link

Why Aligning an LLM is Hard, and How to Make it Easier

RogerDearnaley · 23 Jan 2025 6:44 UTC
34 points · 3 comments · 4 min read · LW link

The Best Way to Align an LLM: Is Inner Alignment Now a Solved Problem?

RogerDearnaley · 28 May 2025 6:21 UTC
31 points · 34 comments · 9 min read · LW link

Why Reality Has A Well-Known Math Bias

Linch · 21 Jul 2025 22:13 UTC
42 points · 18 comments · 1 min read · LW link
(linch.substack.com)

A “Bitter Lesson” Approach to Aligning AGI and ASI

RogerDearnaley · 6 Jul 2024 1:23 UTC
64 points · 41 comments · 24 min read · LW link

Gradient descent is not just more efficient genetic algorithms

leogao · 8 Sep 2021 16:23 UTC
56 points · 14 comments · 1 min read · LW link

Visual Exploration of Gradient Descent (many images)

silentbob · 17 Sep 2025 13:09 UTC
38 points · 9 comments · 20 min read · LW link

The Human’s Role in Mesa Optimization

silentbob · 9 May 2024 12:07 UTC
5 points · 0 comments · 2 min read · LW link

We Need To Know About Continual Learning

michael_mjd · 22 Apr 2023 17:08 UTC
30 points · 14 comments · 4 min read · LW link

Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection

Oliver Sourbut · 9 May 2022 21:38 UTC
70 points · 19 comments · 8 min read · LW link · 1 review
(www.oliversourbut.net)