You criticize Conjecture’s CEO for being… a charismatic leader good at selling himself and leading people? Because he’s not… a senior academic with a track record of published papers? Nonsense. Expecting the CEO to be the primary technical expert seems highly misguided to me.
Yeah, this confused me a little too. My current job (in soil science) has a non-academic boss and a team of us boffins, and he doesn’t need to be an academic, because that’s not his job; he just has to know where the money comes from, and how to stop the stakeholders from running away screaming when us soil nerds turn up to a meeting and start emitting maths and graphs out of our heads. Likewise at the previous place I was at, I was the only non-PhD-haver on technical staff (being a ‘mere’ postgrad), and again our boss wasn’t academic at all. But he WAS a leader of men and a herder of cats, and cat herding is probably a more important skill in that role than actually knowing what those cats are talking about.
And it all works fine. I don’t need an academic boss, even if I think an academic boss would be nice. I need a boss who knows how to keep the payroll from derailing, and I suspect the vast majority of science workers feel the same way.
My suspicion is that the most instructive cases to look at (modern AI really is too new a field to have much to go on in terms of mature safety standards) are in how the regulation of nuclear and radiation safety has evolved over time. Early research suggested some serious X-risks that thankfully didn’t pan out, whether for scientific reasons (igniting the atmosphere) or logistical/political ones (cobalt bombs, Tsar Bomba-scale H-bombs), but some risks arising more out of the political domain (having a big gnarly nuclear war anyway) still exist that could certainly make it a less fun planet to live on. I suspect the successes and failures of the nuclear treaty system could be instructive here, given the push to integrate big AI into military hierarchies: regulating nukes is something almost everyone agrees is a very good idea, but one with a less than stellar history of compliance. Those treaties are likely out of scope for whatever your goal is here, but I do think they need serious study, because without it, our attempts at regulation will just push unsafe AI to less savory jurisdictions.
The “Dark Forest” idea actually appeared earlier, in the novel “The Killing Star” by Charles Pellegrino and George Zebrowski, sometime in the 90s. (I’m not implying [mod-edit]the author you cite[/mod-edit] ripped it off; I have no claims to make on that, rather that he was beaten to the punch.) And I think the Killing Star’s version of the idea (Pellegrino uses the metaphor “Central Park after dark”) is slightly stronger.
Killing Star’s method of annihilation is the relativistic kill vehicle: essentially, if you can accelerate a rock to relativistic speed (say 1/3 the speed of light), you have a planet buster, and such a weapon is almost unstoppable even if by sheer luck you do see the damn thing coming. It’s low-tech, lethal, and well within the capabilities of any species advanced enough to leave their solar system.
The most humbling feature of the relativistic bomb is that even if you happen to see it coming, its exact motion and position can never be determined; and given a technology even a hundred orders of magnitude above our own, you cannot hope to intercept one of these weapons. It often happens, in these discussions, that an expression from the old west arises: “God made some men bigger and stronger than others, but Mr. Colt made all men equal.” Variations on Mr. Colt’s weapon are still popular today, even in a society that possesses hydrogen bombs. Similarly, no matter how advanced civilizations grow, the relativistic bomb is not likely to go away...
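For a sense of scale, here is a back-of-envelope sketch of the energy involved. The rock mass and the TNT conversion are my own illustrative assumptions, not figures from the novel:

```python
# Relativistic kinetic energy of a rock at 1/3 c, to sanity-check the
# "planet buster" claim. The 1-tonne mass is an illustrative choice.
C = 299_792_458.0   # speed of light, m/s
MT_TNT = 4.184e15   # joules per megaton of TNT

def relativistic_ke(mass_kg: float, beta: float) -> float:
    """Kinetic energy (J) of mass_kg moving at beta * c."""
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * mass_kg * C**2

# A mere 1-tonne rock at 1/3 c:
ke = relativistic_ke(1000.0, 1/3)
print(f"{ke:.2e} J  ~= {ke / MT_TNT:,.0f} Mt TNT")
```

Even one tonne at that speed carries on the order of a thousand megatons of TNT equivalent, roughly twenty-odd Tsar Bombas, which is why a modest asteroid-sized projectile qualifies as a planet buster.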
So Pellegrino argues that, as a matter of simple game theory, because diplomacy is nigh on impossible thanks to light-speed delay, the most rational response to discovering another alien civilization in space is “Do unto the other fellow as he would do unto you, and do it first.” Since you don’t know the other civilization’s temperament, you can only assume it has a survival instinct, and would therefore kill you to preserve itself at even the slightest possibility that you would kill it, because you would do precisely the same. Thus such an act of interstellar omnicide is not an act of malice or aggression, but simply self-preservation. And, of course, if you don’t wish to engage in such cosmic violence, the alternative as a species is to remain very silent.
I find the whole concept absolutely terrifying, particularly in light of the fact that exoplanets DO in fact seem to be everywhere. Of course, the real reason for the Fermi paradox might be something else: Earth’s uniqueness (I have my doubts on this one); humanity’s local uniqueness (i.e. advanced civilizations might be rare enough that we are well outside the travel distances of other advanced species, which is much more likely); or, perhaps most likely, that radio communication is just an early part of the tech tree that advanced civilizations eventually stop using. We have, alas, precisely one example of an advanced civilization to judge by: us. That’s a sample size that’s rather hard to reason about.
I think people need to remember one very, very important mantra: “I might be wrong!”. We all love trying to calculate the odds, weighing up the possibilities, and then deciding “Well, I’m very informed, I must be right!”. But we always have a possibility of being stonkingly, and hilariously, wrong on every count. There are no soothsayers; the future isn’t here. For all we know, AGI turns up out of the blue and it turns out to be one of those friendly Minds out of the old Iain Banks novels, fond by default of their simple mush-brained human antecedents and ready and willing to help. I mean, it’s possible, right? And it might just be like that because we all did the work. And then you get to tell your grandkids one day, “Hey, we used to be a bit worried the Minds would kill us all. But I helped research a way to make sure that never happens.” And your grandkids will think you’re somewhat excellent. Isn’t that a good thought?
The term gets its name from its historical association with the nonviolence movement (think Gandhi and MLK). The basic concept in THAT movement is that when opposing the state or whatever, you essentially say, “We won’t use violence on you, even if you go as far as to use violence on us, but in doing that you forfeit all moral justification for your violence”, as a way to force the authoritarian entity targeted to empathise with the protestor and recognize their humanity. From that, NVC attempts to do something similar with communication, presumably rooted in the 1960s nonviolence movement and the rhetorical and communicative techniques used by black folk in the South to try and get government and civil officials to see black folks as equal humans.

How this translates into a modern context, separated from that specific historical setting, is another matter, but within its origin I don’t think hyperbole is quite the right term, as at that point in history black folks were very much in danger of violence, particularly in the more regressive parts of the South. Again, outside of those contexts, it’s unclear how the term “violence” works here.

It should be noted that Marshall Rosenberg, who originated the methodology, was not a fan of the term; he disliked it being defined in the negative (i.e. “not violent”) and preferred terms that defined it in the positive, like “compassionate communication” (“is compassionate”).
“The Good Samaritans” (oft abbreviated to “Good Sammys”) is the name of a major local poverty charity here in Australia, run by the Uniting Church. They’re generally well regarded and tend not to push religion too hard (compared to the Salvation Army). So yeah, it would appear to be a fairly recurring name.
I suspect most of us occupy more than one position in this taxonomy. I’m a little bit doomer and a little bit accelerationist. I think there’s significant, possibly world-ending, danger in AI, but I also think, as someone who works on climate change in my day job, that climate change is a looming, significant, civilization-ending risk or worse (20%-ish) for humanity, and I worry humans alone might not be able to solve this thing. Lord help us if the Siberian permafrost melts; we might be boned as a species. So as a result, I just don’t know how to balance these two potential x-risk dangers. No answers from me, alas, but I think we need to understand that many, maybe most, of us haven’t really planted our flag in any of these camps exclusively; we’re still information-gathering.
Definitely. The lower the neuron-to-‘concepts’ ratio is, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, these seem to be the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn’t seem practical.
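For what it’s worth, here is a minimal toy sketch of what superposition means when concepts outnumber neurons. The directions are an illustrative choice of my own, not taken from any actual model: three “concepts” share two neurons, each assigned its own direction in activation space, and each stays recoverable as long as only one is active at a time.

```python
# Toy superposition: 3 concepts packed into 2 neurons via distinct
# directions in neuron space. With sparse activation (one concept at
# a time), a dot-product readout still identifies the active concept.
import numpy as np

n_neurons, n_concepts = 2, 3
angles = 2 * np.pi * np.arange(n_concepts) / n_concepts
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2) unit directions

for concept in range(n_concepts):
    activation = W[concept]            # 2-neuron state encoding this concept
    scores = W @ activation            # readout score against each direction
    assert scores.argmax() == concept  # recovered, despite 3 concepts > 2 neurons
print("3 concepts recovered from 2 neurons")
```

The catch, of course, is interference: the off-target scores are nonzero (-0.5 here), so the trick only works while activations stay sparse, which is the usual story for why a low neuron-to-concept ratio forces more superposition.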
Yeah, it happens largely in the first few chapters; it’s not really a spoiler. It’s the event the book was famous for.
The count of “how many humans will be born” is a pretty useful number to engage in moral reasoning about how our actions today relate to the future. If we neglect carbon-induced climate change because we won’t be around for the worst of it, we are dooming potentially trillions of future humans to a lousy existence because of our lack of action. If we assume that their lives will have the same value as our own (we do have to be careful with this line of reasoning; it can have intolerable implications on a currently hot topic in the courts when taken to its logical ends), then the immorality of ignoring their plight is legion. Bad news. Putting a number on it lets us factor that into a utilitarian calculus. Good stuff. Kurzgesagt really do science communication the right way.
I don’t think he’s trying to say AI won’t be impactful (obviously it will), just that trying to predict it isn’t an activity one ought to apply any surety to. Soothsaying isn’t a thing. There’s ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one might still end up being the one that does us in, if the political winds don’t change soon), and now AI. We think that AI might go foom, but there might be some limit we just won’t know about until we hit it, and we have various estimations, all contradicting each other, on how bad, or good, it might be for us. Attempting to fix those odds in firm conviction, however, is not science; it’s belief.