
the gears to ascension

Karma: 4,473

I go by “Lauren (often wrong)” on most public websites these days (e.g., Bluesky), a name inspired by Often Wrong Soong, Data’s creator in Star Trek.

I want literally every human to get to go to space often and come back to a clean and cozy world.

[updated 2023-03] Mad Librarian. Bio overview: Crocker’s Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

Please critique eagerly. I try to accept feedback under Crocker’s rules but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I’ll try to reciprocate kindly; more communication between researchers is needed anyhow. I downvote only unhelpful rudeness; call me on it if I’m unfair. I can be rather passionate, so let me know if I missed a spot being kind while passionate.

.… We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….

I’m self-taught and often missing concepts, but usually pretty good at knowing what I know; as a visual metaphor, I often compare my learning to jump point search, in contrast to schooled folks’ A*. I don’t defer on timelines at all: my view is that it’s obvious to anyone who reads enough research what the big labs’ research plans must be in order to make progress; it’s just not easy to agree on when they’ll succeed, since actually making progress on the basic algorithms requires a lot of knowledge, and then a ton of compute to see if you did it right. But as someone who learns heavily out of order, I believe this without being able to push SOTA myself. That’s why I call myself a librarian.

Don’t get yourself in denial thinking it’s impossible to predict; instead, get arrogant and try to understand, because just like capabilities, safety is secretly easy; we just haven’t figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let’s see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.

.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

Some current favorite general links (somewhat related to safety, but human-focused):

More about me:

:.. make all safe faster: end bit rot, forget no non-totalizing pattern’s soul. ..:

(I type partially with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding. While it’s quite good, apologies for trivial typos!)

The Social Substrate

the gears to ascension · 9 Feb 2017 7:22 UTC
23 points
15 comments · 15 min read · LW link

Test post

the gears to ascension · 25 Sep 2017 5:43 UTC
1 point
3 comments · 1 min read · LW link

Discussion: Linkposts vs Content Mirroring

the gears to ascension · 1 Oct 2017 17:18 UTC
10 points
8 comments · 1 min read · LW link

Avoiding Selection Bias

the gears to ascension · 4 Oct 2017 19:10 UTC
20 points
17 comments · 1 min read · LW link

Events section

the gears to ascension · 11 Oct 2017 16:24 UTC
2 points
6 comments · 1 min read · LW link

Hypothesis about how social stuff works and arises

the gears to ascension · 4 Sep 2018 22:47 UTC
31 points
14 comments · 6 min read · LW link

thought: the problem with less wrong’s epistemic health is that stuff isn’t short form

the gears to ascension · 5 Sep 2018 8:09 UTC
0 points
27 comments · 1 min read · LW link

“The Bitter Lesson”, an article about compute vs human knowledge in AI

the gears to ascension · 21 Jun 2019 17:24 UTC
52 points
14 comments · 4 min read · LW link
(www.incompleteideas.net)

[Question] What can currently be done about the “flooding the zone” issue?

the gears to ascension · 20 May 2020 1:02 UTC
6 points
5 comments · 1 min read · LW link

We haven’t quit evolution [short]

the gears to ascension · 6 Jun 2022 19:07 UTC
5 points
3 comments · 2 min read · LW link

How to make your CPU as fast as a GPU—Advances in Sparsity w/ Nir Shavit

the gears to ascension · 20 Sep 2022 3:48 UTC
2 points
0 comments · 27 min read · LW link
(www.youtube.com)

Interpreting systems as solving POMDPs: a step towards a formal understanding of agency [paper link]

the gears to ascension · 5 Nov 2022 1:06 UTC
13 points
2 comments · 1 min read · LW link
(www.semanticscholar.org)

Relevant to natural abstractions: Euclidean Symmetry Equivariant Machine Learning—Overview, Applications, and Open Questions

the gears to ascension · 8 Dec 2022 18:01 UTC
8 points
0 comments · 1 min read · LW link
(youtu.be)

[link, 2019] AI paradigm: interactive learning from unlabeled instructions

the gears to ascension · 20 Dec 2022 6:45 UTC
2 points
0 comments · 2 min read · LW link
(jgrizou.github.io)

Metaphor.systems

the gears to ascension · 21 Dec 2022 21:31 UTC
25 points
9 comments · 1 min read · LW link
(metaphor.systems)

Stop Talking to Each Other and Start Buying Things: Three Decades of Survival in the Desert of Social Media

the gears to ascension · 8 Jan 2023 4:45 UTC
1 point
14 comments · 1 min read · LW link
(catvalente.substack.com)

[talk] Osbert Bastani—Interpretable Machine Learning via Program Synthesis—IPAM at UCLA

the gears to ascension · 13 Jan 2023 1:38 UTC
9 points
1 comment · 1 min read · LW link
(www.youtube.com)

Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023

the gears to ascension · 30 Jan 2023 17:37 UTC
29 points
4 comments · 1 min read · LW link
(humanvaluesandartificialagency.com)

Hinton: “mortal” efficient analog hardware may be learned-in-place, uncopyable

the gears to ascension · 1 Feb 2023 22:19 UTC
10 points
3 comments · 1 min read · LW link

[Question] If I encounter a capabilities paper that kinda spooks me, what should I do with it?

the gears to ascension · 3 Feb 2023 21:37 UTC
28 points
8 comments · 1 min read · LW link