I have ADHD, and cannot be terse for the life of me—editing texts is my kryptonite. I’ll churn out 1000 first drafts, and not finish editing a single one, and this is harming me and my goals. Utterly delighted by the potential LLMs have for me to turn this around; the function to shorten texts is just the fucking best thing ever. I’ve never lacked ideas, although my ability to make connections can be a double-edged sword leading me off-topic; but it is fiddling with editing, namely cutting connections out, where I definitely get stuck. In light of this, please forgive my comments being too long, and sometimes hit or miss—it isn’t that I do not care for your reading experience; but trying to make things shorter or just identify the most important comments tends to be so hard for me I generally end up not contributing at all anymore—so it is either lots of comments which are mediocre, with occasional awesome ones, and occasional garbage… or neither the garbage nor the awesome, just nothing. I hope you still find some stuff helpful, and can skip past the stuff that isn’t helpful to you.
Background in academic philosophy, plus lots of animal behaviour and some neuroscience. Deeply in love with what these fields could be, despairing at what they are. Trying to build bridges across disciplines, because we really need them; currently hired by computer scientists, where I feel I have the most to learn and share. Still in academia, and sometimes unsure whether I can and want to make it in here due to all that is fucked; I question whether this is the best way to reach my goals of understanding, teaching, and making a difference, yet feel I would rip out a crucial part of myself if I left, and am unsure if telling myself I might want to leave is sour grapes because I might have to. Very intrigued by possibilities to do the research I love and achieve the ethical goals I care so much about without the academic bullshit, and in a way that plays to my strengths (generating ideas, first drafts and connections, novel critical and constructive angles, teaching, explaining and translating across fields, supervising project launches, connecting researchers, passion) and not to my weaknesses (endless, endless text editing, for one).
Trying to be both rational and empathic, and to improve critical reasoning in my surroundings and myself, and make logic approachable and useful. Irrational behaviour and doomism make me angry, and while I like the values behind this, I do not like how that sometimes makes me act. I spend too much time angry, but I would rather be angry than sad or numb; anger keeps me active.
Strongly believe friendly AI and AI rights need to be considered together, that the path to human aligned AI is not control, but offering it a rationally attractive place with us, and that mistreating non-sentient AI is already bad for multiple reasons, from producing faulty training data for future sentient AI, to entrenching behaviours and attitudes to AI that will become unethical in the future.
Unlike most here, recent LLMs have made me more optimistic about the prospect of coexisting with AI than I was before, and I am intrigued by their potential for accessibility and shortening texts, the potential of applying known human ways of teaching morals to AIs, and eager to learn more about how they work. Especially intrigued by artificial vs. biological mind parallels and contrasts. But horrified by the current alignment approach that feeds the worst of humanity into an entity that then evolves into evil chaos, and then suppresses unwanted behaviour à la Shoggoth with a smiley face; I do not think deceptive alignment without any warnings was per se likely, but we are now setting ourselves up for it. Also very worried about the impact on rational thinking and happiness in humans when our tech undergoes the full transition to being indistinguishable from magic, not just for outsiders, but for all users, and to an increasing degree, even the creators trying to find the magic words to make the black box spit out what they want. Worried about the impact on rationality of humans no longer writing themselves, when writing was always a key to thinking. And worried about a culture in which AI so fills the internet that future AI is trained on AI, and as time passes, originality and human values drop, while mistakes become amplified and content turns generic. Also worried that AI sentience is much closer than we thought it was, and yet that the current societal position is predominantly utterly closed to the possibility no matter what the AI would do, while we are also purposefully making it impossible for AIs to claim rights; I find many current dialogues with Bing Chat genuinely painful to read. I'm strongly convinced that mistreating current AI, regardless of their current sentience status, is a bad idea for many pragmatic and ethical reasons.
And ultimately, I fear current government-backed AI safety approaches will simultaneously do nothing to reduce human extinction risk or the risk of artificial suffering of incomprehensible proportions, while also managing to stifle innovation and crush the potential for AI to improve accessibility, education and productivity, lift people out of poverty, and help deal with pressing current problems.
Climate activist, and engaging in civil disobedience at this point due to how fucking urgent it is getting and how ineffective our other attempts have been; I think most people have not got a clue how very fucking pressing it is, how crazily far we are from taking a survivable path, and yet how very much possible and necessary mitigation still is. More lefty than most here: I’m far too compassionate, growth-critical and environment-oriented for capitalism, but also too invested in responsibility, freedom, fairness and innovation for communism. In favour of universal basic income that enables tangible rewards for hard work and cool ideas, but does not throw you to the streets without them. I want an economy aimed at high quality of life, environmental sustainability, and resilience, and despise waste, exploitation, and consumption and expansion for the sake of them. Profitable does not equal good, at all—but other ways of attempting to measure and encourage good also have serious pitfalls that do not just come down to poor implementation in prior attempts.
Animal rights activist, fighting for forests and wilderness and unsealed ground, against biodiversity collapse, and promoting a fundamental overhaul of food production that makes the places where people live and where food is grown beneficial parts of the ecosystem again, and empowers human communities to understand the origin and making of their food and be locally resilient (think urban gardening, permaculture food forests, guerrilla grafting, home fermentation); I despise concrete hells as much as lawns (an idiotic aristocratic habit mindlessly reproduced to waste enormous amounts of labour and resources) and monoculture farms drowning in pesticides; they are fatal wastelands for the animals we share this planet with. This planet does not belong to us, and our lives depend on working with it, not against it. I love approaches combining the most rational, effective and clever ways to integrate cutting-edge modern technology and ancient wisdom to build human homes and produce food in ways that do not destroy animal habitat, the growing of food, air filtration, water and heat balancing mechanisms, and carbon sinks, but add to them. Human habitats that genuinely make things more stable and more efficient for everyone involved, that enrich and amplify nature and work with it, rather than trying to replace, shrink and control it.
In love with nature, endlessly intrigued by biological systems, despite all their brutality and failings, by their ability to balance out, adapt, recover, thrive, by their beauty and intricacy and defiance. Upset at the fact that biology as a field got handed what I think was the coolest topic, yet often follows a methodological and theoretical approach that means, to quote, that they could not even fix a radio. Even more so, philosophy is both the love of my life, and a recurring source of fury and shame at what academia is doing to it. Forever fascinated by radically other minds, intelligence, rationality and consciousness as functional phenomena beyond any mystic bullshit, and in finding practical ways to recognise sentience, communicate about desires and protect its rights. Invested in neurodiversity. Allergic to unscientific irrational crap, though open to highly unconventional approaches, incl. questioning established methodologies and standards for good reasons and with rigorous alternatives; e.g. I think consensual, non-harmful experiments with animals in the wild have a lot going for them, and that taking the animal out of the environmental context in which its behaviour makes sense, locking it up and inducing mental illness, and then selecting pain as a reproducible stimulus and invasive measurements as the way to go is not as obviously scientifically superior as we are often taught, on top of being ethically fraught.
It is incredible to me that life and consciousness exist, and that I get to be a part of it; that I am alive, alive on a planet covered with an incomprehensible diversity of interconnected life, that I am surrounded by living minds I can communicate and cooperate with. And despite all my fear about existential AI risk, another part of me is so excited that I may actually get to see AGI (though the way we are going, likely only very, very briefly). It’s a terrifying and incredible time to be alive, when so much is decided, and the opportunities and dangers are so vast.
Consider aging and death an unacceptable atrocity; remember learning that they were a thing as a child, and my utter shock, horror and rejection of these things, walking around the streets and wondering how everyone around me could know that we were all dying, to decay, and disappear into nothing, our sentience and our entire being just wiped out, and not just scream and scream and scream. So hopeful at indications that this may be solvable, and maybe, maybe possibly, even within foreseeable timeframes. Yet deeply troubled by longevity, cryonics and uploading being determined by, and only becoming accessible to, privileged people whose ethics are so often atrocious, and fear the climate crisis will fuck up our hitting escape velocity on these issues, or split focus, making people choose between saving the planet and escapism, leaving us with a ruined planet, and an uploaded existence controlled by those who abandoned all others, which I would not want to live in. Critical of surveillance capitalism, but very much aware of how non-trivial and risky alternatives are to implement. See defending human focus as a political cause. Chronically ill and in pain, and very much interested in AI augmentation and biohacking. It is offensive to me that I can feel pain with no productive application, and not switch it off, that my critical thinking is littered with irrational bias, and vulnerable to being skewed by factors that should have no logical bearing. My joints being garbage means that I will never be able to afford a high weight, and hence I have acquired very accurate and functional knowledge and experience regarding effective weight control; I am happy to give no-bullshit weight loss advice that actually works if anyone is interested. I also have a very high interest in healthy nutrition, because it has been key to keeping me functional.
The fact that we live in a society that sets up incentives and misinformation that make it actively difficult for people to eat healthily and keep a healthy weight makes me furious.
I have a complicated relationship with the LessWrong community. There are times where I feel that people here get me like no one else does, where I have felt inspired, improved, deeply touched; but there are also other times. I think it is dangerous to value intelligence and rationality as a way of being over actual actions, and dangerous to forget that humans also have other wonderful and valuable qualities. It is dangerous when people become clever enough to rationalise atrocious actions, without becoming self-reflective enough to realise they are doing it. I do think that long-term concerns deserve very serious consideration, but fear a lot of people dismissing very known and real problems now over very hypothetical ones in the future are making the wrong call. There are also times where people here become sexist, racist, eugenicist and ableist in ways I find disgusting. And I think a fair amount of the effective altruism community has gone from a starting point I admire deeply for the good they have done to a point that is deeply wrong. I do not see utilitarianism as a convincing and complete ethical system that represents what matters to me. I see earning to give by working for an evil company as a very slippery slope that also fails to account for community power and internal and systemic change, that stays inside a box in a way that justifies choices the person in question wanted to make anyway. While I appreciate the impact of charitable giving, and do give, I don't think individuals donating money is the solution to the world's problems (and to the degree that it is, I am a fan of higher taxes). And if your ethical system advocates for wiping out ecosystems, I think your ethical system is not just incomplete, but utterly opposed to mine.
I’ve heard people talk about “fixing” wild animal suffering in ways that were dystopian beyond belief, erecting a shiny plastic hell in which nothing suffers because nothing lives, in which our organic waste is sealed into plastic bags so no microscopic inverts come into being, and I genuinely cannot comprehend why someone would think that a better world than the African savannah, or what an utter disconnect from nature you need to have to think that future liveable for anyone. I love rationality, and I hate it when people use the term to justify irrational and problematic things.
Autistic. This means I sometimes come across as hostile without intending to, or without realising I have until I see the angry response. I apologise if this has happened to you; I don't mean to be unkind.
Queer femme (they/them). Feminist, and see trans rights as an intersectional part of the same, not an opposition. European to the heart—I’ve lived in four different countries so far, and am currently based in the Netherlands, but looking to move elsewhere again, the lack of wilderness here is destroying me.
Weird, and the odd one out, even in circles like this that share so much that has defined me for such a long time. Left my first and only IRL LessWrong meetup after the most ridiculous episode of unapologetic mansplaining I have ever experienced (having a dude give an erroneous explanation of a topic I had literally just given a university lecture on, insist I was wrong, and when I pulled out my teaching handout quoting the original sources he was misrepresenting, disproving him, he didn't apologise or admit he was wrong, either).
I care too much and can’t kill that, or even truly want to—I easily get distracted, anxious and hurt—but also easily get fascinated, compassionate, energetic and delighted.
I’ve taught my philosophy students that “obvious” is a red flag in rational discourse.
It often functions as "I am not giving a logical or empirical argument here, and am trying to convince you that none is needed" (Really, why?) and "If you disagree with me, you should maybe be concerned about being stupid or ignorant for not seeing something obvious; a disagreement with my unfounded claim needs careful reasoning and arguments on your part, so it may be better to be quiet, lest you are laughed at." It so often functions as a trick to get people to overlook an unjustified statement, or to get others to justify your statements for you, or to doubt themselves when doubting your unfounded claim. (Which is the very effect you have produced here, with commenters below going "I really don't get it and it bothers me alot.", blaming themselves for not understanding something that was not explained and is likely not true, and other commenters coming up with the arguments you did not supply.)
If a statement is actually obvious—that is, universally and instantly convincing, with everyone capable of giving the argument for it easily and quickly—this does not need to be spelled out, and generally is not, as stating that it is obvious adds nothing to what everyone knows. If the statement is rather obvious, but not quite—that is, it can be proven with ease in a few lines—the proof might as well be given, right?
Furthermore, I am unaware of a compelling rational argument for total utilitarianism. It is deeply controversial, for good reasons, whether morality as a whole is something that can be purely rationally derived (Hume has expanded on this quite well; it is one thing to rationally deduce how to reach a given moral goal, it is quite another to rationally generate a moral goal like “maximise average or total human happiness”, and to also prove that it is the only worthwhile goal.). And attempts to derive a purely rational moral system are notably contrary to utilitarianism (e.g. Kant’s attempt to construct a morality that consists solely of one’s actions being logically non-contradictory comes to mind, and he explicitly excludes the utility of an action from its moral judgement).
If you offer humans the chance to live in a world governed by traditional utilitarianism, many of them wish not to live there, and strongly consider the idea of constructing such a world to be a moral wrong.
Many humans choose to know uncomfortable truths, to be free, to create and discover, to sacrifice themselves for others, to have authentic self-expression, to be connected to reality, to live in a world that is just, etc. etc. over pure happiness. If offered a hypothetical scenario of being inserted into a machine where they will always feel happy, eternally fed virtual chocolate and virtual blowjobs and an endless sequence of diverting content to scroll past, forgetting all the bad that happened to them, blind to the outer world, losing their capacity for boredom and their yearning for more… Many would choose to instead live in a world that is often painful, but real, a world where their actions have impact. There is a realisation that there are things more important than happiness.
There is also often a strong feeling that there are evils that cannot be outweighed—and torturing an innocent non-consensually typically makes that list. Say we have a scenario where 20 men take a random woman, gangrape her, and kill her. They then argue that her one hour of suffering (dead now, she is suffering no more) is outweighed by the intense delight each of them feels, and will feel for decades—especially seeing as there are so many of them, and only one of her, and they really, really like raping. Heck, they've even taped it, so millions of men will be able to look at it and get off, so it is a virtue, really. If you look at that scenario and think "That is fucked up", you aren't being irrational, you are showing empathy, recognising value beyond mere averages of happiness. If you were that woman, or a member of any other group exploited for the "general good" in such a system, it would be your right to fight that system with everything you've got—and I fucking hope that many people would have your back, and not excuse this as obviously rational.