Writing correct code given a specification is the relatively easy part of software engineering. The hard part is deciding what you need that code to actually do. In other words, “requirements”—your dumbass customer has only the vaguest idea what they want and contradicts themselves half the time, but they still expect you to read their mind and give them something they’re going to be happy with.
CronoDAS
Possibly one of the only viable responses to a hostile AI breakout onto the general Internet would be to detonate several nuclear weapons in space, causing huge EMP blasts that would fry most of the world’s power grid and electronic infrastructure, taking the world back to the 1850s until it can be repaired. (Possible AI control measure: make sure that “critical” computing and power infrastructure is not hardened against EMP attack, just in case humanity ever does find itself needing to “pull the plug” on the entire goddamn world.)
Hopefully whichever of Russia, China, and the United States didn’t launch the nukes would be understanding. It might make sense for the diplomats to get this kind of thing straightened out before we get closer to the point where someone might actually have to do it.
(This is because eventually they do lose an election, and then they do fight a civil war. For example, the American South fought a civil war rather than allow Lincoln to become their President.)
I brought it up with him again, and my father backpedaled and said he was mostly making educated guesses on limited information, that he knows that he really doesn’t know very much about current AI, and isn’t interested enough to talk to strangers online—he’s in his 70s and figures that if AI does eventually destroy the world it probably won’t be in his own lifetime. :/
Representative democracy can only last so long as people prefer losing an election to fighting a civil war.
He might also argue “even if you can match a human brain with a billion dollar supercomputer, it still takes a billion dollar supercomputer to run your AI, and you can make, train, and hire an awful lot of humans for a billion dollars.”
Because there were enough people selling for prices lower than $40 to satisfy the demand for greater fools?
Also, stocks can be sold short if the price goes too high.
Yes, I know.
My father thinks that ASI is going to be impractical to achieve with silicon CMOS chips because Moore’s law is eventually going to hit fundamental limits—such as the thickness of individual atoms—and the hardware required to create it would end up “requiring a supercomputer the size of the Empire State Building and consuming as much electricity as all of New York City”. Needless to say, he has very long timelines for generally superhuman AGI. He doesn’t rule out that another computing technology could replace silicon CMOS; he just doesn’t think ASI would be practical unless that happens.

My father is usually a very smart and rational person (he is a retired professor of electrical engineering) and he loves arguing, and I suspect that he is seriously overestimating the computing hardware it would take to match a human brain. Would anyone here be interested in talking to him about it? Let me know and I’ll put you in touch.

Update: My father later backpedaled and said he was mostly making educated guesses on limited information, that he knows he really doesn’t know very much about current AI, and isn’t interested enough to talk to strangers online—he’s in his 70s and figures that if AI does eventually destroy the world, it probably won’t be in his own lifetime. :/
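For what it’s worth, the disagreement mostly comes down to order-of-magnitude arithmetic. Here’s a minimal sketch; the two numbers are rough published estimates I’m supplying for illustration (brain-equivalent compute estimates vary by several orders of magnitude), not anything my father or I actually computed:

```python
# Back-of-envelope comparison. Both figures are loose, commonly cited
# order-of-magnitude estimates, not measured facts.
brain_flops_high = 1e16    # a high-end estimate of brain-equivalent compute
frontier_flops   = 1e18    # rough scale of a top 2020s exascale supercomputer

# If the high-end brain estimate is right, one existing supercomputer
# already has ~100x that much raw compute.
print(frontier_flops / brain_flops_high)  # 100.0
```

If even the pessimistic end of the estimate range is roughly right, matching a brain doesn’t obviously require a building-sized machine, which is why I suspect he’s overestimating.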
If it’s 1950, is having gay sex unwholesome?
(yes, I know you’re being ironic)
Well, trade does have a more zero-sum character when both sides of the trade have the same preferences, but if you can credibly claim to have different preferences, you’re also in a better position to convince the person on the other side of the trade that you’re not trying to offer them a bad deal. (For example, if you’re selling stock because you want to spend the money, you don’t care if you disagree with someone about what the stock will be worth in the future; you just want to sell it for the best offer you can get right now.)
I think you missed the point of the Laffy Taffy example. He got the flavor he didn’t like because he’d been systematically eating the ones he did like while leaving the flavor he didn’t like in the bowl. (Or his friend wasn’t actually picking at random.)
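That selection dynamic is easy to see in a toy simulation (the flavor names and counts below are made up for illustration): if you keep eating only the flavors you like, the bowl ends up containing nothing but the flavor you don’t.

```python
import random

# A bowl of 100 taffies: four flavors, 25 each; "grape" is the disliked one.
flavors = ["cherry", "apple", "banana", "grape"]
bowl = [f for f in flavors for _ in range(25)]
random.shuffle(bowl)

# Systematically eat only liked flavors while any remain.
eaten = []
while any(f != "grape" for f in bowl):
    pick = next(f for f in bowl if f != "grape")
    bowl.remove(pick)
    eaten.append(pick)

# All that's left is the disliked flavor, so a "random" grab from
# the bowl now yields grape with probability 1.
print(bowl.count("grape"), len(bowl))  # 25 25
```

The same logic holds partway through: every liked taffy eaten raises the fraction of grape among what remains, so a friend picking “at random” from the bowl is increasingly likely to hand you grape.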
I imagine a criminal defense attorney gets lied to more than Dr. House.
Out of context, I could totally believe someone would use that name for a chat room as a joke. Then again, I’m the kind of guy who can barely keep himself from offering to tell bomb jokes to airport security.
Ah. It turns out that I was mistaken in thinking that the 5th Amendment guaranteed the right to refuse to testify against one’s spouse; the text of the amendment doesn’t mention spouses at all. (Mandela effect strikes again?)
That’s surprising, and I think I must be missing some context. Random Googling seems to suggest that spousal testimonial privilege applies to events before the marriage, at least in federal court—if he did legally marry his girlfriend, she shouldn’t have had to testify at his trial if she didn’t want to. Different states do treat spousal privilege differently, but am I missing something else? Did the police learn something from the girlfriend before the marriage that the prosecution can use in court without having her testify?
What it tends to boil down to is that they don’t trust me to be their criminal co-conspirator.
Yeah, as a certain TV character said, they don’t want a criminal lawyer, they want a criminal lawyer.
I’ve heard the opposite—a dead shooting victim means there aren’t any witnesses to contradict your story.
What he said. Analyzing politically volatile data and determining that it’s clearly made up feels on-brand for LessWrong regardless of what one thinks about the underlying issues...