I go by “Lauren (often wrong)” on most public websites these days (e.g. Bluesky), inspired by Often Wrong Soong, Data’s creator in Star Trek.
I want literally every human to get to go to space often and come back to a clean and cozy world.
[updated 2023/03] Mad Librarian. Bio overview: Crocker’s Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.
:: The all of disease is as yet unended. It has never once been fully ended before. ::
Please critique eagerly—I try to accept feedback per Crocker’s rules but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I’ll try to reciprocate kindly. More communication between researchers is needed, anyhow. I downvote only unhelpful rudeness; call me on it if I’m unfair. I can be rather passionate; let me know if I missed a spot being kind while passionate.
.… We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….
I’m self-taught and often missing concepts, but usually pretty good at knowing what I know; I often compare my learning to a visual metaphor of jump point search, in contrast to schooled folks’ A*. I don’t defer on timelines at all: my view is that anyone who reads enough research can see what the big labs’ research plans must be to make progress; it’s just not easy to agree on when they’ll succeed, and it takes a lot of knowledge to actually make progress on the basic algorithms, and then a ton of compute to see if you did it right. But as someone who learns heavily out of order, I believe this without being able to push SOTA myself. It’s why I call myself a librarian.
Don’t get yourself in denial thinking it’s impossible to predict; just get arrogant and try to understand, because just like capabilities, safety is secretly easy; we just haven’t figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let’s see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.
.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.
some current favorite general links (somewhat related to safety, but human-focused):
https://www.microsolidarity.cc/ - incredible basic guide on how to do human micro-coprotection. It’s not the last guide humanity will need, but it’s a wonderful one.
https://activisthandbook.org/ - solid intro to how to be a more traditional activist. If you care about bodily autonomy, freedom of form, trans rights, etc, I’d suggest at least getting a sense of this.
https://metaphor.systems/ - absolutely kickass search engine.
More about me:
ex-startup founder. It went okay (not a unicorn); I burned out in 2019. A couple of jobs since; I quit the last one in early 2022. Independent mad librarian, living on savings until they run out; possibly joining a research group soon.
Lots of links in my shortform to YouTube channels I like.
:.. make all safe faster: end bit rot, forget no non-totalizing pattern’s soul. ..:
(I type partially with voice recognition, mostly with Talon, Patreon-funded freeware that I love and recommend for voice coding; while it’s quite good, apologies for trivial typos!)
[edit: why does this have so many more upvotes than my actually useful shortform posts]
Someone mentioned maybe I should write this publicly somewhere, so that it is better known. I’ve mentioned it before but here it is again:
I deeply regret cofounding vast and generally feel it has almost entirely done harm, not least by empowering the other cofounder, who I believe to be barely better than e/acc folks due to his lack of interest in trying to achieve an ought that differs from is. I had a very different perspective on safety then and did not update in time to avoid doing a very bad thing. I expect that if you and someone else are both going to build something like vast, and theirs takes three weeks longer to get to the same place, it’s better to give the world those three weeks without the improved software. Spend your effort instead on things like lining up the problems with QACI and cannibalizing its parts to build a v2, possibly using ideas from boundaries/membranes, or on other work relevant to understanding the desires, impulses, goals, wants, needs, objectives, constraints, developmental learning, limit behavior, robustness, guarantees, etc. of mostly-pure-RL curious-robotics agents.
Incidentally, I’ve had many conversations with GPT-4 where I try to get it to tell me what difference it thinks justifies its (obviously reward-induced and therefore at-least-somewhat-motivated-reasoning) claim that it’s not like humans, and the only justification it consistently gives is continuous-time lived experience versus discrete-time secondhand textual training data. I feel like video models, and especially egocentric robotics video models, don’t have that difference...