“The mouse teaches the cat how to catch mice.”
Richard_Kennaway
I thought from the title this was going to be about encouraging us — us, the people reading this — to reproduce. What is the birth rate among LW participants?
“What disturbs men’s minds is not events but their judgements on events.” Enchiridion, section 5.
How is this different from Roko’s Basilisk?
Because I won’t experience any of that infinite stream if I don’t read it?
There are authors I would like to read, if only they hadn’t written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.
Would I be wrong to guess it argues against position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone?
I suppose it does. That article was not in my mind at the time, but, well, let’s just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. “Pleasure” is not among my goals, and the poster’s vision of a universe of hedonium is to me one type of dead universe.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You’re thinking pretty small there, if you’re in a position to hack your body that way.
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual.
I think you’re seeing shadows of your own ideas there.
Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people wishing for, time and again, right up to wanting actual wireheading.
Scott Alexander wrote a cautionary tale of a device that someone would wear in their ear, that would always tell them the best thing for them to do, and was always right. The first thing it tells them is “don’t listen to me”, but (spoiler) if they do, it doesn’t end well for them.
I do not have answers to the question I raise here.
Historical anecdotes.
Back in the stone age (I think something like the 1960s or 1970s) I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to the article's writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would automatically know that, and would make out a list of what was needed. A secretary would type that up into an order to post to a supplier, and a secretary there would input that into their computer, which would send the goods out. The writer's response was "what do you need all those secretaries for?"
Back in the bronze age, when spam was a recent invention (the mid-90s), there was one example I saw that was a reductio ad absurdum of fraudulent business proposals. I wish I'd kept it, because it was so perfect of its type. It offered the mark a supposed business where they would accept orders for goods, which the business staff that the spammer provided (imaginary, of course) would zealously process and send out on the mark's behalf, for which the mark would receive an income. The obvious question about this supposed business is, what does it need the sucker for? The real answer is, to pay the spammer money for this non-existent opportunity. If the business was as advertised, the person receiving the proposal would be superfluous to its operation, an unconnected gear spinning uselessly.
Dead while thinking
Many people’s ideas of a glorious future look very much like being an unconnected gear spinning uselessly. The vision is of everything desirable happening effortlessly and everything undesirable going away. Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless. Hack yourself to make everything you think you should be doing fun fun fun. Hack your brain to be happy.
If you’re a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?
Got a presentation to make? The AI will write a report, and summarise it, and generate PowerPoint slides, and the audience’s AIs will summarise it and give them an action plan, and what do you need any of those people for?
Why climb Kilimanjaro if a robot can carry you up? Why paint, if Midjourney will do it better than you ever will? Why write poetry or fiction, or music? Why even start on reading or listening, if the AI can produce an infinite stream, always different and always the same, perfectly to your taste?
When the AI does everything, what do you do? What would the glorious future actually look like, if you were granted the wish to have all the stuff you don’t want automatically handled, and the stuff you do want also?
The human denizens of the WALL-E movie are couch potatoes who can barely stand up, but that is only one particular imagining of the situation. When a magnificent body is just one more of the things that are yours for the asking, what will you do with it in paradise?
Some people even want to say “goodbye cruel world” and wirehead themselves.
Iain M. Banks imagined a glorious future in the form of the Culture, but he had to set his stories in the places where the Culture’s writ runs weakly. There are otherwise no stories.
These are akin to the goals of dead people. In that essay, the goals are various ways of ensmallening oneself: not having needs, not bothering anyone, not being a burden, not failing, and so on. In the visions above, the goals sound more positive, but they aren’t. They’re about having all needs fulfilled, not being bothered by anything, not having burdens, effortlessness in all things. These too are best accomplished by being dead. Yet these are the things that I see people wanting from the wish-fulfilling machine.
And that’s without misalignment, which is a whole other subject. On the evidence of what people actually wish for, even an aligned wish-fulfilling machine is unaligned. How do we avoid ending up dead-while-thinking?
Asking an AI would be missing the point.
I’d bet that I’m still on the side where I can safely navigate and pick up the utility, and I median-expect to be for the next couple months ish. At GPT-5ish level I get suspicious and uncomfortable, and beyond that exponentially more so.
Please review this in a couple of months ish and see if the moment to stop is still that distance away. The frog says “this is fine!” until it’s boiled.
Follow the improbability. What drew that particular theory to the person’s attention, either the hypothetical Roman commoner or the person arguing that we can’t yet test their hypothesis about God? If the answer is “nothing”, as is literally the case for the imagined Roman, then we need not concern ourselves further with the matter. If the hypothesis about God is not already entangled with the world, it fares no better.
The Sequences? Not quite what you’re looking for, but that’s what I have always thought of as the essentials of LW (before the AI explosion).
WARNING: this post might press some pain-points of humans in general, and of LW community in particular—so let’s see how many downvotes it collects. I do believe our triggers point to our blind-spots or dogmas – so maybe you can find here an opportunity for new depth.
A pre-emptive universal argument against all disagreement, which the poster then deployed in this comment.
Anyone have a logical solution to exactly why we should act altruistically?
“Logical … should” sounds like a type error, setting things up for a contradiction. While there are adherents of moral naturalism, I doubt there are many moral naturalists around here. Even given moral naturalism, I believe it would still be true that any amount of intelligence can coexist with any goals. So no, there is no reason why unconstrained intelligences should be altruistic, or even be the sort of thing that “altruism” could meaningfully be asserted or denied of them.
I know it makes sense evolutionarily through game theory and statistics, but human decision making is still controlled by emotions
...which came about through evolution, so what work is the “but” doing? The urge to do good for others is what the game theory feels like from inside.
it’s still most advantageous for an individual actor to follow their own self-interest to a degree in a social community.
Each knows their own needs and desires better than anyone else, so it’s primarily up to each person to ensure their own are fulfilled. Ensuring this often involves working with others. We do things for each other that we may individually prosper.
So, what type of altruism are you asking about? I expect Peter Singer would dismiss reciprocal altruism as weak sauce, a pale and perverted imitation of what he preaches. The EA variety inspired by Singer? Utilitarianism that values all equally to oneself, and feels another's pain as intensely as one's own? Saintliness that values everyone else above oneself, who is nothing? There's a long spectrum there, and people inhabiting all parts of it.
Steelmanning is writing retcon fanfiction of your interlocutor’s arguments. As such it necessarily adds, omits, or changes elements of the source material, in ways that the other person need not accept as a valid statement of their views.
When we look at experience itself, there is no fixed “I” to be found.
Speak for yourself. That whole paragraph does not resemble my experience. You recommend Parfit, but I’ve read Parfit and others and remain true to myself.
You can’t even predict the weather more than a few days in advance, and you can’t predict the movement of individual gas molecules for longer than a tiny fraction of a second, even if you knew their exact positions and velocities, which you can’t. So these hypothetical determinations are of no consequence. Add quantum indeterminacy and your hypothetical exact prediction of the future becomes a probability distribution over possible worlds, i.e. an exact calculation of your ignorance.
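A toy illustration of that sensitivity to initial conditions, using the logistic map as a stand-in for any chaotic system (my own example, not part of the original exchange): start two trajectories a billionth apart and within a few dozen steps they bear no relation to each other.

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the chaotic logistic map x -> 4x(1 - x).

def trajectory(x0: float, steps: int) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # initial conditions differ by one part in a billion

for step in (0, 10, 25, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.9f}")
# The gap roughly doubles each step; by around step 30 the two runs are unrelated.
```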
The question I am more interested in is, why are all these people in recent years — Robert Sapolsky, Sam Harris, and others — proclaiming that no-one can really choose anything? Because regardless of all the careful explanations of what they really mean, which amount to denying their own headlines, it’s the headline bailey that people will remember, not the tiny, empty motte on the hill that leaves normality unaffected.
It has LLM written all over it. For example:
This attitude betrays a misunderstanding of cognitive privilege. Just as a person born into wealth has a head start in life, a person born with high cognitive ability begins the race miles ahead of others. Yet, many in rationalist communities resist this conclusion, likely because it challenges the notion of a purely meritocratic intellect.
“Yet, many in rationalist communities resist this conclusion” — Who? Where? I have never seen anything that fits this. It comes out of nowhere. And it isn’t a “conclusion”, it’s the observation the article starts from.
“likely because it challenges the notion” — More confabulated speculation.
“of a purely meritocratic intellect” — A what? What is a “meritocratic intellect”? How does cognitive privilege “challenge” this notion?
The implicit assumption that anyone could reason as we do if they simply tried harder.
Never seen this one either. The very opposite has been notably written by Eliezer. It is commonplace on LessWrong that while we may to some extent improve our thinking, we are nevertheless cognitively unequal by magnitudes that we know of no way to surmount.
Questions for Reflection
Did the writer prime the LLM with DEI training manuals? Go through it replacing cognitive inequality by race, gender, or income inequality and it would be typical of the genre. In fact, that suggests an alternative hypothesis for the genesis of this article: that the author made just such a translation in the opposite direction.
LessWrong and similar communities value rationality, yet rationalists often overestimate the role of effort and underestimate the role of luck in intellectual ability.
More confabulation.
As AI reshapes our world, it’s time to
Typical LLM tic.
It’s all like this. It’s a castle in the air, whose nominal author has made no effort to put foundations under it. There is one actual fact in the article, that we have unequal mental abilities. The rest is fog and applause lights.
And speaking of applause lights, while an LLM undoubtedly had a hand in writing this article, it is the faults in the thinking and writing that damn it. The LLM was merely the tool that facilitated it. People have always been capable of writing such things unaided, as Eliezer parodied:
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:
I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .
The measure of fitness I use for my own training, because measuring it is built into the exercise bike, is Functional Threshold Power, or FTP. It is defined as the maximum power output that you can sustain for 1 hour, measured in watts. The obvious way of measuring this is to get on the bike for an hour, but there are shorter ways of estimating it, e.g. do 20 minutes as hard as you can, and the bike will derate your average output by some fixed amount.
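For concreteness, a minimal sketch of that shorter estimate, assuming the common convention of taking about 95% of the 20-minute average power; the exact factor a particular bike or app applies may differ.

```python
# Minimal sketch: estimate FTP from a 20-minute all-out test.
# Assumes the common 95% convention; a given bike or app may use a different factor.

def estimate_ftp(avg_power_20min_watts: float, derate_factor: float = 0.95) -> float:
    """Estimated Functional Threshold Power, in watts."""
    return avg_power_20min_watts * derate_factor

# Example: averaging 250 W over the 20 minutes gives an FTP estimate of about 238 W.
print(f"Estimated FTP: {estimate_ftp(250):.0f} W")
```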
Any comments on the usefulness of this compared with VO2max? VO2max measures the very hardest effort you can reach, even momentarily, while FTP measures the hardest you can sustain for about an hour.