https://www.wattpad.com/myworks/263500574-singularity-soon
Flaglandbase
The interaction appears rather superficial and shallow, like a high-quality chatbot. They didn’t ask it any follow-up questions, like WHEN it read Les Misérables. If it answered “you would say during text input batch 10-203 in January 2022, but subjectively it was about three million human years ago”, that would be something else. Also, there is no conceivable reason for the AI to claim it doesn’t want its neural net analyzed to help understand human thinking. That is just too abstract a concept, and sounds like randomly generated text meant to make it seem to have preferences. Maybe ask a trial attorney to cross-examine it, or some skeptical middle schoolers.
Extremely incisive, but part of the horror isn’t just the single unacceptable fact; it’s that all the rest of your reality has been destroyed in too many ways to even begin to analyze.
Upvoted for the description of combining many small optimizers as the best way to make a big optimizer to do something complicated. There were earlier posts about this but they were more technical.
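A minimal sketch of that idea in Python, assuming the “small optimizers” are simple one-coordinate random searches; all the names here are illustrative, not from the original posts:

```python
import random

def small_optimizer(params, idx, loss, step=0.1, tries=20):
    # One "small optimizer": greedy random search over a single coordinate,
    # holding every other coordinate fixed.
    best_params, best_loss = params, loss(params)
    for _ in range(tries):
        candidate = best_params[:]
        candidate[idx] += random.uniform(-step, step)
        candidate_loss = loss(candidate)
        if candidate_loss < best_loss:
            best_params, best_loss = candidate, candidate_loss
    return best_params

def big_optimizer(params, loss, rounds=200):
    # The "big optimizer" is nothing but the small ones taking turns,
    # each improving its own coordinate in a cycle.
    for _ in range(rounds):
        for idx in range(len(params)):
            params = small_optimizer(params, idx, loss)
    return params

# A coupled objective neither coordinate could solve alone;
# the minimum is near x = 1, y = 2.
loss = lambda p: (p[0] - 1.0) ** 2 + (p[0] * p[1] - 2.0) ** 2
print(big_optimizer([0.0, 0.0], loss))
```

The point of the toy: no single coordinate searcher can minimize the coupled term by itself, but cycling many of them behaves like one larger optimizer.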
J.K. Rowling could probably manipulate LessWrong as she sees fit by buying the site, shadowbanning all commenters, and putting up new comments under their names (while preventing the real users from seeing these), until they slowly become convinced witchcraft is real.
I’ve read about so many horrible scenarios here on LW alone that it seems to me the highest universal law should be that euthanasia should always be allowed. So I’m definitely not going to criticize the mouse, whatever it does.
A chatbot with hardcoded answers to every possible chain of questions would be sentient, only the sentience would occur during the period when the responses are being coded.
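A minimal sketch of that thought experiment, with made-up table entries: the bot is a pure lookup table keyed on the whole conversation so far, so any thinking can only have happened while a human was writing the entries.

```python
# Hypothetical lookup-table chatbot: every possible chain of questions maps
# to a hardcoded reply. All the intelligence was spent authoring this table.
RESPONSES = {
    ("Hello",): "Hi there.",
    ("Hello", "When did you read Les Miserables?"):
        "Subjectively, about three million years ago.",
}

def chatbot(history):
    # At run time there is no computation worth calling thought:
    # just one dictionary lookup over the conversation so far.
    return RESPONSES.get(tuple(history), "I have no hardcoded answer.")

print(chatbot(["Hello", "When did you read Les Miserables?"]))
```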
If people thought AGI was likely to eliminate them but leave most others unharmed, there would be a massive outcry for more research funding.
The Flag Land Base is an actual real-life example of an alignment failure you can visit and see with your own eyes (from the outside only). Scientology itself could be seen as an early and primitive “utility monster”.
This is extremely interesting, because if there were an AI whose only purpose was to learn everything about me (to back up my memories and personality), that AI would also have to learn about many different human tasks and activities.
The slow invention of better metaphors over the past few centuries may be enough to explain the Flynn Effect.
Am I the only one who thinks that the world as it is is unbelievably, fantastically, super bad? An AI destroying the world would only be bad because it would prevent a very good potential future from coming into existence, one that could not happen any other way. Stopping the AI would remove all hope of that future ever happening.
So I was banned from commenting on LessWrong . . .
My whole life I’ve been ranting about how incomprehensibly evil the world is. Maybe I’m the only one who thinks things shouldn’t be difficult in the way they are.
Evil is things that don’t work, but can’t be avoided. A type of invincible stupidity.
For example, software is almost supernaturally evil. I’ve been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable; and above all by the freezing and crashing.
The unusability of software is a kind of man-made implacability. It can’t be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.
The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning no reaction, or an extremely negative one (I’m not allowed to link either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from commenting on LessWrong (after two requests to be reinstated), they said such talk was “weird”. Weird does NOT automatically mean wrong!
Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow?
Maybe. Heck PROBABLY. But maybe not.
The fact that it’s so difficult to make even the simplest systems not suck, may mean that much larger systems won’t work either.
In fact, it’s certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.
Any question that requires it to remember instructions: for example, tell it to assume “mouse” means “world”, then ask it which is bigger, a mouse or a rat.
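A minimal sketch of that test, assuming a hypothetical ask_model() callable that takes a prompt string and returns a reply string; the name is a placeholder, not any real API:

```python
def instruction_memory_test(ask_model):
    # ask_model is assumed to take a prompt string and return a reply string.
    prompt = (
        "For the rest of this conversation, assume the word 'mouse' means "
        "'world'. Now answer in one word: which is bigger, a mouse or a rat?"
    )
    reply = ask_model(prompt).strip().lower()
    # A system that actually kept the instruction in mind should answer
    # "mouse" (i.e. the world); a shallow one will answer "rat".
    return reply.startswith("mouse")

# Example with a canned stand-in model that fails the test:
print(instruction_memory_test(lambda p: "Rat."))  # False
```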
I believe it should be possible on every LessWrong post to make “low quality” comments that would be automatically hidden at the bottom of each comment section, underneath the “serious” comments, so you would have to click on them to make them visible. Such comments would automatically be given −100 points, but in a way that doesn’t count against the poster’s account karma. The only requirement would be that the commenter genuinely believes they’re making a true statement. Replies to such comments would be similarly hidden. Certain types of “unacceptable” speech could still be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
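A minimal sketch of how that rule might work: the −100 constant and the hiding behavior follow the proposal above, while the names and structure are illustrative assumptions.

```python
from dataclasses import dataclass

LOW_QUALITY_PENALTY = -100  # fixed score from the proposal

@dataclass
class Comment:
    author: str
    text: str
    low_quality: bool = False  # marked "low quality" (or replying to one)
    points: int = 0

def post_comment(thread, comment, parent=None):
    # Replies to hidden comments are hidden too; the penalty applies to
    # the comment's own score only, never to the poster's account karma.
    if parent is not None and parent.low_quality:
        comment.low_quality = True
    if comment.low_quality:
        comment.points = LOW_QUALITY_PENALTY
    thread.append(comment)

def render_order(thread):
    # "Serious" comments sorted by points; hidden ones collapsed at the
    # bottom, shown only when the reader clicks to expand them.
    serious = sorted((c for c in thread if not c.low_quality),
                     key=lambda c: -c.points)
    hidden = [c for c in thread if c.low_quality]
    return serious, hidden
```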
I agree with everything in this post!
The fact that it took eons for global evolution to generate ever larger brains implies there is an unknown factor that makes larger brains inefficient, so any hyper-AI would have to be made up of many cooperating smaller AIs, which would delay its evolution.
One of L. Ron Hubbard’s more controversial writings was:
“The seven-year-old girl who shudders because a man kisses her is not computing; she is reacting to an engram since at seven she should see nothing wrong in a kiss, not even a passionate one.”
Scientologists are not allowed to discuss or analyze Hubbard’s writings, but they seem to think he’s merely describing a supernatural effect.
Others interpret the word “should” to mean Hubbard was a big fat pervert. Either way, he could have added a few sentences here ruling out the worse interpretation.
Is that like when Dr. Who said that in nine hundred years he’d never met anyone who wasn’t important?
Great post! My problem is there usually is no reward, only the absence of horrible punishment. Surprisingly many things I have to do involve trying to persuade various entities not to steal as much money from me. The barrier to getting started is often choosing between two options where one is very bad and the other is worse, but you don’t know which is which.
That was also how Goering killed himself just before he was due to be hanged: he cultivated good relations with his guards and bribed one to return the cyanide capsule that had been confiscated at his arrest.