Possibly related: I think Yann LeCun is doing an excellent job of alerting people to the potential dangers of AI, by presenting conspicuously bad arguments to the effect of, “don’t worry, it’ll be fine.”
Michael Roe
The motivation of the original paper appears to be avoiding copyright infringement lawsuits.
If the model’s Harry Potter knockoff is just barely different enough from the original to avoid getting sued, then the goal is achieved.
The idea of a boarding school story with magic is probably not original enough to be worthy of copyright protection.
I have also noticed that large language models know a lot about Stephenie Meyer’s Twilight.
Vampire story: fine, the genre is well out of copyright
Characters called Bella and Edward, and the secondary love interest is a werewolf? More problematic.
Call them something else, like Christian Grey? Maybe.
(For those who don’t know: Fifty Shades of Grey is a disguised Twilight fanfic).
I think politics often involves bidding for the compromise you think is feasible, rather than what you’d ideally want.
What’s maybe different in the AI risk case, and others like it, is how you’ll be regarded when things go wrong.
Hypothetical scenario:
An AI does something destructive, on the order of 9/11.
Government overreacts as usual, and cracks down on the AI companies like the US did on Osama bin Laden, or Israel did on Hamas.
You are like, “Yeah, we knew that was going to happen.”
Government to you: “What the fuck? Why didn’t you tell us?”
Re: the question of what happens if a non-US model is dropped on Hugging Face … presumably Hugging Face can report to the government “we have done no tests whatsoever” and be compliant with the regulation? (Oh wait, maybe not, if the originator of the model has actually done some tests but kept them secret.) Or later, if there are mandatory tests, Hugging Face can just run them and report the results to the government.
So, I am told there are a very large number of people who hear voices, but do not otherwise have any of the seriously disabling symptoms of schizophrenia.
Possibilities that come to mind:
Spectrum condition. Non-pathological voice hearers have the same relation to diagnosed schizophrenia as high-functioning posters on LessWrong have to the severe forms of autism.
Maybe not even the same thing at all: whatever causes schizophrenia just coincidentally happens to have a symptom in common with some other, harmless thing.
(In the case of autism, the argument that the mild and severe forms share a causal factor is that parents with the mild form statistically tend to have children with the severe form, suggesting a heritable common factor. I have absolutely no idea if there is comparable evidence for voice-hearing.)
But you’re posting on LessWrong, which means you’re interacting with the Rationalist community, which seems to have way, way more trans people than the national average. I mean, look at manifold.love, the community’s take on a dating site, and see how long it takes to encounter a trans person. Or, for that matter, the community talking about AI risk contains a bunch of trans people, so you’re pretty much guaranteed to encounter at least one in any extended discussion of AI risk.
Without having empirical data to back this claim, my guess would be that the autism distribution has a single peak at “no symptoms”. That is, the majority of the population has no symptoms, there are lots of people with mild symptoms, and the severe symptoms are down in the tail of the distribution.
First, a bit of background: radio hams have competitions where you win points for correctly receiving a radio message from the other participants. To win at this game, you’ve got to be good at hearing a weak, distant radio station that is almost obscured by the background static.
A phenomenon reported by radio operators, especially when using Morse code, is that you copy down a message from the other guy, but are not 100% sure whether what you’ve heard is real or just hallucinatory. (You get to find out that it was real when the judge of the contest gives you the points for having correctly transcribed the message you were sent.)
So that certainly exists, but it’s unclear whether voice hearing is that type of phenomenon or something else.
If he was fired for some form of sexual misconduct, we wouldn’t change our views on AI risk. But the betting seems to be that it wasn’t that.
On the other hand, if the reason for his firing was something like he had access to a concerning test result, and was concealing it from the board and the government (illegal as per the executive order) then we’re going to worry about what that test result was, and how bad it is for AI risk.
Worst case: this is an AI preventing itself from being shut down, by getting the board members sympathetic to itself to fire the board members most likely to shut it down. (The “surely you could just switch it off” argument is lacking in imagination as to how an AGI could prevent shutdown.) Personally, I put a low probability on this option.
A wild (probably wrong) theory: Sam Altman announcing custom GPTs was the thing that pushed the board to fire him.
Customizable AI → user can override RLHF (maybe, probably) → we are at risk from AIs that have been fine-tuned by bad actors.
I think the intelligence community ought to be watching AI risk, whether or not they actually are.
E.g., in the UK, the Socialist Workers Party was heavily infiltrated by undercover agents; widely suspected at the time, subsequently officially confirmed. Now, you may well disagree with their politics, but it’s pretty clear they didn’t amount to a threat. The infiltration should probably have been wound down after quickly establishing that the threat level was low.
AI risk, on the other hand … from the point of view of an intelligence agency, there’s uncertainty about whether there’s a real risk or not. Seems worthwhile getting some undercover agents in place to find out what’s going on.
… though, if in 20 years’ time it has become clear the AI apocalypse has been averted, and the three-letter agencies are still running dozens of agents inside the AI companies, we could reasonably say their agents have outlived their usefulness.
With current LLMs, the algorithm is fairly small and the information is all in the training set.
This would seem to make foom unlikely, as the AI can’t easily get hold of more training data.
Using the existing data more efficiently might be possible, of course.
I am not a Bayesian, so I have philosophical objections to giving probabilities to things that are not a distribution you can sample from.
The survey just assumes that everyone is a Bayesian.
Some examples of bugs that were particularly troublesome on a recent project.
In the MIPS backend for the LLVM compiler, there is one point where it ought to be checking whether the target CPU is 32-bit or 64-bit. Instead, it checks whether the MIPS version number is mips64. Problem: there were 64-bit MIPS versions before mips64, e.g. MIPS IV, so the check is wrong. Obvious when you see the line of code, but it took days of tracing through thousands and thousands of lines of code to find it.
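In code, the shape of the bug is roughly this (a minimal sketch; the struct and field names are illustrative stand-ins, not the actual LLVM source):

```cpp
// Minimal sketch of the bug described above; names are illustrative,
// not real LLVM identifiers.
struct MipsSubtarget {
  bool HasMips64;  // the ISA revision is named "mips64" (or later)
  bool IsGP64bit;  // general-purpose registers are 64 bits wide
};

bool targetIs64Bit(const MipsSubtarget &ST) {
  // Buggy check: conflates "ISA revision is mips64" with "CPU is 64-bit".
  // MIPS III / MIPS IV cores are 64-bit but predate the mips64 name,
  // so this would wrongly report them as 32-bit:
  //   return ST.HasMips64;

  // Correct check: test the property the caller actually cares about.
  return ST.IsGP64bit;
}
```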
With a particular version of FreeBSD on MIPS, it works fine on a single core, but the console dies on multi-core. The serial line interrupt is routed to one of the cores. On receiving an interrupt, the OS disables the interrupt and puts handling the interrupt on a scheduling queue. When the task is taken off the queue, the interrupt is handled and then re-enabled. Problem: the last step might be scheduled to run on a different core. If that happens, the interrupt remains disabled on the core that receives the interrupt, and enabled on a core that never receives it. Console output dies.
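A sketch of the race (all identifiers here are hypothetical stand-ins, not the real FreeBSD kernel API):

```cpp
#include <array>

constexpr int NCPU = 2;

// Hypothetical per-core interrupt-enable flags; in reality this state
// lives in each core's interrupt controller.
std::array<bool, NCPU> serial_irq_enabled{true, true};

int current_cpu() { return 0; }  // stub: id of the core we are running on
void handle_serial_data() {}     // stub: drain the UART

// Top half: runs on the core the serial interrupt is routed to (say core 0).
void serial_irq_entry() {
  serial_irq_enabled[current_cpu()] = false;  // mask the IRQ locally
  // ...enqueue the bottom half on a kernel thread and return...
}

// Bottom half: the scheduler is free to run this thread on ANY core.
void serial_irq_thread() {
  handle_serial_data();
  // BUG: if the thread migrated, this re-enables the IRQ on the wrong
  // core. Core 0, which actually receives the interrupt, stays masked
  // forever, and console I/O dies, matching the symptom described above.
  serial_irq_enabled[current_cpu()] = true;
  // Fix (one option): re-enable on the core the IRQ is routed to, or pin
  // this thread to that core.
}
```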
I think I’m going to put a low probability on “within 10 years AIs will be able to disempower humanity”.
I think merely being good at science is in no way sufficient to be able to do that; it would require a bunch of additional factors (e.g. power seeking, ability to persuade humans, etc.).
But on the other hand, I think Yann LeCun is way too complacent when he imagines that intelligent AIs will be just like high-IQ humans employed by corporations. At a slight risk of being controversial, I would suggest that, e.g., Hamas members are within the range of human behaviours in the training set that an AI could choose to emulate.
I think this post is a good one, even though personally I’m not hung up on AI doom; I think this area of research is cool and interesting, which is a rather different emotion from fear.
My immediate thought is that Cognitive Behavioral Therapy concepts might be relevant here, as it sounds like a member of the family of anxiety disorders that CBT is designed to treat.
And also, given that this is a group phenomenon rather than a purely individual one, there’s something of the apocalyptic religious cult dynamic going on.
One thing that can be kind of irritating about CBT practitioners is the way they tend to focus on the emotion about X, rather than on whether you think X is likely or a practical problem. And then you notice that our typical English-language way of talking about things doesn’t distinguish the two well. So… at least when speaking to someone who is into CBT, you can temporarily adopt a way of speaking that carefully distinguishes the two.
Watching the poly/mono argument on Twitter, my best guess is that sex has a very different personal meaning for these two groups of people. They certainly seem to be talking past each other.
The trans community has, traditionally, considered there to be a distinction between cross-dressers and transsexuals. We might now think that the distinction is blurry and it’s not always obvious which category a particular person belongs to. For example, someone might meet most of the expected characteristics of the transsexual type but not have actually had surgery, due to considerations of risk, expense, being on a years-long NHS waiting list, etc.
Russell Reid (used to be a psychiatrist in the UK, now retired) used to regard the response to hormones as diagnostic. Like, if you’re sexually driven, estrogen will typically reduce your sex drive and you won’t like it, but transsexuals will actually feel better. (Some people report increased sex drive on estrogen; that there is sometimes an atypical response is interesting.) Still, there’s an idea here that transition is suitable for some people and not others, and this is one of the points where you get to find out which type you are.
(There’s a joke that goes, “Q: What’s the difference between a cross-dresser and a transsexual? A: About six months”, which may have some truth to it, but is not the official answer to that question.)
You sometimes hear an argument like this in conspiracy theory groups. It goes something like this:
“My own pet conspiracy theory is sensible! But all the other conspiracy theories on here, they’re completely stupid! Nobody could possibly believe that! In fact, I think they’re all undercover agents sent by the government to make conspiracy theorists look stupid. Oh, wait, that’s also a conspiracy theory, isn’t it? Yes, I believe that one.”