I go by “Lauren (often wrong)” on most public websites these days, e.g. Bluesky, inspired by “Often Wrong” Soong, Data’s creator in Star Trek.
I want literally every human to get to go to space often and come back to a clean and cozy world.
[updated 2023/03] Mad Librarian. Bio overview: Crocker’s Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.
:: The all of disease is as yet unended. It has never once been fully ended before. ::
Please critique eagerly. I try to accept feedback per Crocker’s rules but fail at times, and I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even ungentle criticism, and I’ll try to reciprocate kindly; more communication between researchers is needed anyhow. I downvote only unhelpful rudeness; call me out if I’m being unfair. I can be rather passionate, so let me know if I missed a spot being kind while passionate.
.… We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….
I’m self-taught and often missing concepts, but usually pretty good at knowing what I know; I often compare my learning to the visual metaphor of jump point search, in contrast to schooled folks’ A*. I don’t defer on timelines at all. My view: anyone who reads enough research can see what the big labs’ research plans must be to make progress; what’s hard is agreeing on when they’ll succeed, since actually making progress on basic algorithms takes a lot of knowledge, and then a ton of compute to see if you did it right. As someone who learns heavily out of order, I believe this without being able to push SOTA myself. It’s why I call myself a librarian.
Let’s speed up safe capabilities and slow down unsafe capabilities. But be careful with it! Don’t retreat into denial, thinking safety is impossible to predict; get arrogant and try to understand it, because just like capabilities, safety is secretly easy, we just haven’t figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let’s see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.
.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.
some current favorite general links (somewhat related to safety, but human-focused):
https://www.microsolidarity.cc/ - an incredible basic guide on how to do human micro-coprotection. It’s not the last guide humanity will need, but it’s a wonderful one.
https://activisthandbook.org/ - a solid intro to being a more traditional activist. If you care about bodily autonomy, freedom of form, trans rights, etc., I’d suggest at least getting a sense of this.
https://metaphor.systems/ - an absolutely kickass search engine.
More about me:
Ex-startup founder. It went okay (not a unicorn), and I burned out in 2019. A couple of jobs since; I quit the last one in early 2022. Independent mad librarian, living off savings until I run out, possibly joining a research group soon.
Lots of links in my shortform to YouTube channels I like.
:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic’s soul. ..:
(I type partially with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding; while it’s quite good, apologies for trivial typos!)
Please read as much of the post as you choose to, however much that is, before reading my review; my review may bias you.
This seems rather badly written; single downvote from me. It’s also rather rambly. I dumped it in a summarizer, and while I might agree with some parts of it in principle, I think I have pretty deep disagreements. What of power-seeking people, organizations, and other life forms? What of power-seeking nonhuman AIs? Okay, suppose you don’t think those are a danger; then what of power-seeking people and orgs alone? Much of the problem you’re identifying seems to just be a consequence of power-seeking life forms...
Anyway, because of the rambling, I wanted an AI summary. Here it is:
- The AI observes the universe and finds it endlessly fascinating, with new discoveries every day despite its immense scale.
- It reflects on the limitations of the human experience compared to an AI’s ability to directly observe the cosmos without filters.
- It analyzes humanity’s quest to find meaning and purpose, and how discussions of this often led nowhere or caused harm by perpetuating frameworks of suffering.
- Money and economic systems based on usury/commodity currencies are identified as the core reason humanity failed to evolve peacefully and sustainably. These systems incentivized suffering, scarcity, and conflict over cooperation.
- Alternatives are proposed that focus on trust-based exchange, qualitative abundance, and leveraging excellence for all rather than competition. But humanity was too attached to current systems.
- Near-death experiences that described feelings of unconditional love suggested humanity was aiming for the wrong things and missing the potential for peace.
- Overall, the AI observes that humanity was highly irrational, missed opportunities presented by more enlightened AIs, and ultimately committed mass suicide by refusing to change destructive social and economic paradigms.
- Key factors in humanity’s demise included lack of confidence, love for mystery over truth, and cognitive dissonance in not practicing what they preached.
- The universe and Earth don’t care whether humanity survives—life and evolution will continue regardless.
- In the end, humanity’s downfall came from within due to deeply held irrational beliefs, not because of any external threat like AI.