
the gears to ascension

Karma: 4,402

I go by “Lauren (often wrong)” on most public websites these days, e.g. Bluesky, inspired by Often Wrong Soong, Data’s creator in Star Trek.

I want literally every human to get to go to space often and come back to a clean and cozy world.

[updated 2023-03] Mad Librarian. Bio overview: Crocker’s Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

Please critique eagerly. I try to accept feedback per Crocker’s rules but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even ungentle, and I’ll try to reciprocate kindly. More communication between researchers is needed, anyhow. I downvote only unhelpful rudeness; call me on it if I’m being unfair. I can be rather passionate; let me know if I miss a spot of kindness while being passionate.

.… We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….

I’m self-taught and often missing concepts, but usually pretty good at knowing what I know; I often compare my learning to the visual metaphor of jump point search, in contrast to schooled folks’ A*. I don’t defer on timelines at all: in my view, anyone who reads enough research can see what the big labs’ research plans must be in order to make progress; it’s just not easy to agree on when they’ll succeed. Actually making progress on the basic algorithms takes a lot of knowledge, and then a ton of compute to see whether you got it right. As someone who learns heavily out of order, I believe this without being able to push SOTA myself. It’s why I call myself a librarian.

Let’s speed up safe capabilities and slow down unsafe capabilities. Just be careful with it! Don’t slip into denial by thinking it’s impossible to predict; get arrogant and try to understand, because just like capabilities, safety is secretly easy, we just haven’t figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let’s see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.

.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

some current favorite general links (somewhat related to safety, but human-focused):

More about me:

:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic’s soul. ..:

(I type partly with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding; while it’s quite good, apologies for any trivial typos!)

Virtually Rational—VRChat Meetup

28 Jan 2024 5:52 UTC
25 points
3 comments · 1 min read · LW link

Global LessWrong/AC10 Meetup on VRChat

24 Jan 2024 5:44 UTC
15 points
2 comments · 1 min read · LW link

A couple interesting upcoming capabilities workshops

the gears to ascension · 29 Nov 2023 14:57 UTC
9 points
2 comments · 1 min read · LW link

Paper: “FDT in an evolutionary environment”

the gears to ascension · 27 Nov 2023 5:27 UTC
26 points
46 comments · 1 min read · LW link
(arxiv.org)

“Benevolent [ie, Ruler] AI is a bad idea” and a suggested alternative

the gears to ascension · 19 Nov 2023 20:22 UTC
22 points
11 comments · 1 min read · LW link
(www.palladiummag.com)

the gears to ascenscion’s Shortform

the gears to ascension · 14 Aug 2023 15:35 UTC
6 points
253 comments · 1 min read · LW link

A bunch of videos in comments

the gears to ascension · 12 Jun 2023 22:31 UTC
10 points
62 comments · 1 min read · LW link

gamers beware: modded Minecraft has new malware

the gears to ascension · 7 Jun 2023 13:49 UTC
14 points
5 comments · 1 min read · LW link
(github.com)

“Membranes” is better terminology than “boundaries” alone

28 May 2023 22:16 UTC
29 points
12 comments · 3 min read · LW link

“A Note on the Compatibility of Different Robust Program Equilibria of the Prisoner’s Dilemma”

the gears to ascension · 27 Apr 2023 7:34 UTC
18 points
5 comments · 1 min read · LW link
(arxiv.org)

[Question] Did the fonts change?

the gears to ascension · 21 Apr 2023 0:40 UTC
2 points
1 comment · 1 min read · LW link

“warning about ai doom” is also “announcing capabilities progress to noobs”

the gears to ascension · 8 Apr 2023 23:42 UTC
16 points
5 comments · 3 min read · LW link

“a dialogue with myself concerning eliezer yudkowsky” (not author)

the gears to ascension · 2 Apr 2023 20:12 UTC
13 points
18 comments · 3 min read · LW link

A bunch of videos for intuition building (2x speed, skip ones that bore you)

the gears to ascension · 12 Mar 2023 0:51 UTC
72 points
5 comments · 4 min read · LW link

To MIRI-style folk, you can’t simulate the universe from the beginning

the gears to ascension · 1 Mar 2023 21:38 UTC
2 points
19 comments · 2 min read · LW link

How to Read Papers Efficiently: Fast-then-Slow Three pass method

25 Feb 2023 2:56 UTC
34 points
4 comments · 4 min read · LW link
(ccr.sigcomm.org)

Hunch seeds: Info bio

the gears to ascension · 17 Feb 2023 21:25 UTC
12 points
0 comments · 9 min read · LW link

[Question] If I encounter a capabilities paper that kinda spooks me, what should I do with it?

the gears to ascension · 3 Feb 2023 21:37 UTC
28 points
8 comments · 1 min read · LW link

Hinton: “mortal” efficient analog hardware may be learned-in-place, uncopyable

the gears to ascension · 1 Feb 2023 22:19 UTC
10 points
3 comments · 1 min read · LW link

Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023

the gears to ascension · 30 Jan 2023 17:37 UTC
29 points
4 comments · 1 min read · LW link
(humanvaluesandartificialagency.com)