
I have ADHD, and cannot be terse for the life of me; editing texts is my kryptonite. I’ll churn out 1000 first drafts and not finish editing a single one, and this is harming me and my goals. I’m utterly delighted by the potential LLMs have to turn this around for me; the function to shorten texts is just the fucking best thing ever. I’ve never lacked ideas, though my ability to make connections can be a double-edged sword that leads me off-topic; it is the fiddly work of editing, namely cutting connections out, where I definitely get stuck. In light of this, please forgive my comments being too long and sometimes hit or miss. It isn’t that I do not care for your reading experience; it’s that trying to make things shorter, or even just to identify the most important comments, tends to be so hard for me that I generally end up not contributing at all. So it is either lots of mediocre comments, with occasional awesome ones and occasional garbage, or neither the garbage nor the awesome, just nothing. I hope you still find some of it helpful, and can skip past the stuff that isn’t helpful to you.

Background in academic philosophy, plus lots of animal behaviour and some neuroscience. Deeply in love with what these fields could be, despairing at what they are. Trying to build bridges across disciplines, because we really need them; currently hired by computer scientists, where I feel I have the most to learn and share. Still in academia, and sometimes unsure whether I can, and want to, make it here: so much is fucked, and I question whether this is the best way to reach my goals of understanding, teaching, and making a difference. Yet I feel I would rip out a crucial part of myself if I left, and I am unsure whether telling myself I might want to leave is sour grapes because I might have to. Very intrigued by possibilities to do the research I love and achieve the ethical goals I care so much about without the academic bullshit, and in a way that plays to my strengths (generating ideas, first drafts and connections, novel critical and constructive angles, teaching, explaining and translating across fields, supervising project launches, connecting researchers, passion) and not to my weaknesses (endless, endless text editing, for one).

Trying to be both rational and empathic, to improve critical reasoning in my surroundings and myself, and to make logic approachable and useful. Irrational behaviour and doomism make me angry, and while I like the values behind that anger, I do not like how it sometimes makes me act. I spend too much time angry, but I would rather be angry than sad or numb; anger keeps me active.

Strongly believe friendly AI and AI rights need to be considered together; that the path to human-aligned AI is not control, but offering AI a rationally attractive place with us; and that mistreating non-sentient AI is already bad for multiple reasons, from producing faulty training data for future sentient AI to entrenching behaviours and attitudes towards AI that will become unethical in the future.

Unlike most here, recent LLMs have made me more optimistic about the prospect of coexisting with AI than I was before. I am intrigued by their potential for accessibility and for shortening texts, by the potential of applying known human ways of teaching morals to AIs, and eager to learn more about how they work; especially intrigued by the parallels and contrasts between artificial and biological minds. But I am horrified by the current alignment approach: feeding the worst of humanity into an entity that then evolves into evil chaos, and then suppressing the unwanted behaviour, à la Shoggoth with a smiley face. I do not think deceptive alignment without any warning signs was per se likely, but we are now setting ourselves up for it. Also very worried about the impact on rational thinking and happiness in humans when our tech undergoes the full transition to being indistinguishable from magic, not just for outsiders but for all users, and, to an increasing degree, even for the creators trying to find the magic words that make the black box spit out what they want. Worried about the impact on rationality of humans no longer writing themselves, when writing was always a key to thinking. And worried about a culture in which AI so fills the internet that future AI is trained on AI, so that, as time passes, originality and human values drop while mistakes are amplified and content turns generic. Also worried that AI sentience is much closer than we thought, and yet that the predominant societal position is utterly closed to the possibility no matter what the AI would do, while we are also purposefully making it impossible for AIs to claim rights; I find many current dialogues with Bing Chat genuinely painful to read. I am strongly convinced that mistreating current AI, regardless of its current sentience status, is a bad idea for many pragmatic and ethical reasons.
And ultimately, I fear current government-backed AI safety approaches will do nothing to reduce the risk of human extinction or of artificial suffering of incomprehensible proportions, while still managing to stifle innovation and crush AI’s potential to improve accessibility, education and productivity, lift people out of poverty, and help deal with pressing current problems.

Climate activist, and engaging in civil disobedience at this point because of how fucking urgent it is getting and how ineffective our other attempts have been; I think most people have not got a clue how very fucking pressing it is, how crazily far we are from taking a survivable path, and yet how very much possible and necessary mitigation still is. More lefty than most here: I’m far too compassionate, growth-critical and environment-oriented for capitalism, but also too invested in responsibility, freedom, fairness and innovation for communism. In favour of a universal basic income that enables tangible rewards for hard work and cool ideas, but does not throw you out on the street without them. I want an economy aimed at high quality of life, environmental sustainability and resilience, and despise waste, exploitation, and consumption and expansion for their own sake. Profitable does not equal good, at all—but other ways of attempting to measure and encourage good also have serious pitfalls that do not just come down to poor implementation in prior attempts.

Animal rights activist and vegan, fighting for forests, wilderness and unsealed ground, against biodiversity collapse, and for a fundamental overhaul of food production: one that makes the places where people live and where food is grown beneficial parts of the ecosystem again, and empowers human communities to understand the origin and making of their food and be locally resilient (think urban gardening, permaculture food forests, guerrilla grafting, home fermentation). I despise concrete hells as much as lawns (an idiotic aristocratic habit, mindlessly reproduced to waste enormous amounts of labour and resources) and monoculture farms drowning in pesticides; they are fatal wastelands for the animals we share this planet with. This planet does not belong to us, and our lives depend on working with it, not against it. I love approaches that combine cutting-edge modern technology and ancient wisdom in the most rational, effective and clever ways to build human homes and produce food so that they do not destroy animal habitat, the growing of food, air filtration, water- and heat-balancing mechanisms, and carbon sinks, but add to them: human habitats that genuinely make things more stable and more efficient for everyone involved, that enrich and amplify nature and work with it, rather than trying to replace, shrink and control it.

In love with nature, endlessly intrigued by biological systems despite all their brutality and failings: by their ability to balance out, adapt, recover and thrive; by their beauty, intricacy and defiance. Upset that biology got handed what I think is the coolest topic, yet often follows a methodological and theoretical approach which means, to quote, that they could not even fix a radio. Even more so, philosophy is both the love of my life and a recurring source of fury and shame at what academia is doing to it. Forever fascinated by radically other minds; by intelligence, rationality and consciousness as functional phenomena beyond any mystic bullshit; and by finding practical ways to recognise sentience, communicate about desires, and protect its rights. Invested in neurodiversity. Allergic to unscientific irrational crap, though open to highly unconventional approaches, incl. questioning established methodologies and standards for good reasons and with rigorous alternatives; e.g. I think consensual, non-harmful experiments with animals in the wild have a lot going for them, and that taking an animal out of the environmental context in which its behaviour makes sense, locking it up and inducing mental illness, and then selecting pain as a reproducible stimulus and invasive measurements as the way to go, is not as obviously scientifically superior as we are often taught, on top of being ethically fraught.

It is incredible to me that life and consciousness exist, and that I get to be a part of it; that I am alive, alive on a planet covered with an incomprehensible diversity of interconnected life, that I am surrounded by living minds I can communicate and cooperate with. And despite all my fear about existential AI risk, another part of me is so excited that I may actually get to see AGI (though the way we are going, likely only very, very briefly). It’s a terrifying and incredible time to be alive, when so much is decided, and the opportunities and dangers are so vast.

Consider aging and death an unacceptable atrocity. I remember learning as a child that they were a thing, and my utter shock, horror and rejection: walking the streets wondering how everyone around me could know that we were all dying, to decay and disappear into nothing, our sentience and our entire being just wiped out, and not just scream and scream and scream. So hopeful at indications that this may be solvable, and maybe, maybe, possibly even within foreseeable timeframes. Yet deeply troubled that longevity, cryonics and uploading are being determined by, and only becoming accessible to, privileged people whose ethics are so often atrocious; and I fear the climate crisis will fuck up our hitting escape velocity on these issues, or split focus, making people choose between saving the planet and escapism, leaving us with a ruined planet and an uploaded existence controlled by those who abandoned all others, which I would not want to live in. Critical of surveillance capitalism, but very much aware of how non-trivial and risky alternatives are to implement. See defending human focus as a political cause. Chronically ill and in pain, and very much interested in AI augmentation, nutrition and biohacking. It is offensive to me that I can feel pain with no productive application and not switch it off, and that my critical thinking is littered with irrational bias and vulnerable to being skewed by factors that should have no logical bearing.

Autistic, Queer femme (they/them). Feminist, and see trans rights as an intersectional part of the same, not an opposition. European, currently based in the Netherlands.

Weird, and the odd one out, even in circles like this one that share so much of what has defined me for so long. I left my first and only IRL LessWrong meetup after the most ridiculous episode of unapologetic mansplaining I have ever experienced: a dude gave an erroneous explanation of a topic I had literally just given a university lecture on and insisted I was wrong; when I pulled out my teaching handout, quoting the original sources he was misrepresenting and disproving him, he didn’t apologise or admit he was wrong, either.

I care too much and can’t kill that, or even truly want to; I easily get distracted and anxious.