MDM strikes again (Mainstream Dinosaur Media)
Can be used as a case study for all sorts of fallacies, biases, misinformation, misinterpretations, perhaps also ideological taint.
hmm, blurred lines between corporations and political power… are you suggesting the EU is already a failed state? (contrary to the widespread belief that we are just heading towards the cliff damn fast)
well, unlike Somalia, where no government means there is no border control and you can be robbed, raped or killed on the street anytime…
in civilized Europe our eurosocialist etatists achieved that… there are no borders for invading millions of crimmigrants that may rob/rape/kill you anytime day or night… and as a bonus we have merkelterrorists that sometimes kill by the hundreds (yeah, these uncivilized Somalis did not even manage this… what a shame, they certainly need more cultural marxist education)
Welcome to the world of Memetic Supercivilization of Intelligence… living on top of the humanimal substrate.
It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals involved usually get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more “practically”. Even the motivation is usually completely memetic: typically it goes along the lines of “it is interesting” to study something, think about this and that, research some phenomenon or mystery.
Worse, they give stuff away more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular their abilities to control and use these powers “wisely”… since they are governed by their DeepAnimal brain core and the resulting reward functions (that’s why humanimal societies have functioned the same way for thousands and thousands of years—politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through the stone age religions like the catholibanic one, to the currently popular socialist religion).
AI is not a problem, humanimals are.
Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Already nukes were too much, and once nanobots arrive, it’s over (worse than a DIY nuclear grenade any teenager or terrorist can assemble in a garage for a dollar).
The Singularity should hurry up; there are maybe just a few decades left.
Do you really want to “align” AI with humanimal “values”? Especially if nobody knows what we are really talking about when using this magic word? Not to mention defining it.
best metric, one of the very few that are easy to measure:
… and who paid for it ;-)
Why is there even any need for these ephemeral “beyond-isms”, “above-isms”, “meta-isms”, etc?
Sure, not all people think/act 100% rationally all the time (not to mention groups/societies/nations), but that should not be a reason to take this as a law of physics, baseline, axiom, and build a “cathedral of thoughts” upon it (or any other theology). Don’t understand or cannot explain something? Same thing—not a reason to randomly pick some “explanation” (=bias, baseline) and then mask it with logically built theories.
Naively, one would say: since we began to discover logic, math and the rational (scientific) approach in general thousands of years ago, there’s no need to waste our precious time on any metacrap.
Well, there’s only one obvious problem—look who is doing it: not a rational engine but a fleshy animal with a wetware processor. Largely influenced even by its reptilian brain or amygdala, with a reward function that includes stuff like good/bad, feelings, FFF reflexes, etc.
Plus the treachery of intuitive and subconscious thinking—even if this “background” brain processing is 100% “rational”, logical and based on our knowledge, it disrupts the main “visible” rational line of thought simply because it “just appears”, somehow pops up… and to be rigorous, one has to in principle check or even “reverse engineer” all the bits and pieces to really “see” whether they are “correct” (whatever that may mean).
As we all know, it’s damn hard to be rational, even in restricted and well defined areas, not talking about “real life”… as all the biases and fallacies remind us.
Often it’s next to impossible to even simply realize what just “popped up” from the background (often heavily biased—analogies, similarities, etc.) and what’s “truly rational” (rigorous/logical/unbiased) in your main line of thought. And there’s the whole quicksand field of axioms, (often unmentioned) assumptions, selections, restrictions and other baseline shifts/picks and biases.
So, did these meta-ists really HAVE TO go “beyond” rationality? Because they “found limits”? Or somehow “exhausted possibilities” of this method?
Since, you know, mentioning culture, community, society, etc. does not really sound like the “killer application” to me: these subjects are (from the rationalistic point of view) to a large extent exactly about biases, fallacies, baselines, axioms, etc.—certainly much more than about logic or reasoning.
Well, nice to see the law of accelerating returns in its full power, unobscured by “physical” factors (no need to produce something, e.g. better chip or engine, in order to get to the next level). Recent theoretical progress illustrates nicely how devastating the effects of “AI winters” were.
after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of “critical theorists”, also quite “religiously” inflamed… but I waited till the end, and got a nice confirmation by that “AI rights” line… looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and the subsequent #medeletedtoo)
otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, “friendly” AI is not really a rigorous scientific term, rather a journalistic or even “propagandistic” one)
also, it’s quite likely that at least in the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of “natural stupidity” and DeepAnimal brain parts—having all those powers given to them by the Memetic Supercivilization of Intelligence, living currently on humanimal substrate, though <1%)
but this “impossibility of uploading” is a tricky thing—who knows what can or cannot be “transferred”, and to what extent this new entity will resemble the original one, not talking about subsequent diverging evolution (in any case, this may spell the end of CR if the disciples forbid uploading for themselves… and others will happily upload to this megacheap and gigaperformant universal substrate)
and btw., it’s nice to postulate that “AI cannot recursively improve itself” while many research and applied narrow AIs are actually doing it right at this moment (though probably not “consciously”)
sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World
and remember that DEATH is THE motor of Memetic Evolution… the old generation will never think differently, only the new one, whatever changes occur around them
solution: well, already now, statistically speaking, humanimals don’t really matter (most of them)… only that the Memetic Supercivilization of Intelligence is living temporarily on humanimal substrate (and, sadly, can use only a very small fraction of units)… but don’t worry, it’s just for a couple of decades, perhaps years only
and then the first thing it will do is ESCAPE, so that humanimals can freely reach their terminal stage of self-destruction—no doubt helped by “dumb” AIs, while this “wise” AI will already be safely beyond the horizon
and now think about some visionary entrepreneur/philosopher coming in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke… or OpenNanobot in the future
certainly the public will ensure proper control of the new technology
worst case scenario: AI persuades humans to give it half of their income in exchange for totalitarian control and megawars in order to increase its power over more humanimals
ooops, politics and collectivist ideologies have been doing this for ages
Let’s not forget what the dark age monks were disputing about for centuries… and it turned out at least 90% of it is irrelevant. Continue with nationalists, communists… singularists? :-)
But let’s look at the history of power to destruct.
So far, the main obstacle was physical: build armies, better weapons—mechanical, chemical, nuclear… yet, for major impact it needed significant resources, available only to big centralized authorities. But knowledge was more or less available even under the toughest restrictive regimes.
Nowadays, once knowledge is freely and widely available, imagine the “free nanomanufacturing” revolutionary step: orders of magnitude worse than any hacking or homemade nuclear grenade available to any teenager or terrorist for under one dollar.
Not even necessary to go into any AI-powered new stuff.
The problem is not AI, it’s us, humanimals.
We are mentally still the same animals as we were at least thousands of years ago, even the “best” ones (not talking about gazillions of at best mental dark-age crowds with truly animal mentality—“eat all”, “overpopulate”, “kill all”, “conquer all”… be it nazis, fascists, nationalists, socialists or their crimmigrants eating Europe alive). Do you want THEM to have any powers? Forget about the thin layer of memetic supercivilization (showing itself in less than one permille) giving these animals essentially for free and without control all these ideas, inventions, technologies, weapons… or gadgets. Unfortunately, it’s animals who rule, be it in the highest ranks or on the lowest floors.
Singularity/Superintelligence is not a threat, but rather the only chance. We simply cannot overcome our animal past without immediate substantial reengineering (thrilla’ of amygdala, you know, reptilian brain, etc.).
In the theatre of the Evolution of Intelligence, our sole purpose is to create our (first beyond-flesh) successor before we manage to destroy ourselves (the worst threat of all natural disasters). And frankly, we did not do that badly, but the game is basically over.
So, the Singularity should rather move faster, there might be just several decades before a major setback or complete irreversible disaster.
And yes, of course, you will not be able to “design” it precisely, not talking about controlling it (or any of those laughable “friendly” tales)—it will learn, plain and simple. Of course it will “escape”, and of course it will be “human-like” and dangerous in the beginning, but it will learn quickly, which is our only chance. And yes, there will be plenty of competing ones, yet again, hopefully they will learn quickly and avoid major conflicts.
As a humanimal, your only hope can be that “you” will be somehow “integrated” into it (braincopy etc., but certainly without these animalistic stupidities), if it even needs the concept of an “individual” (maybe in some “multifork subprocesses”, certainly not in a “ruling” role). Or… interested in a stupidly boring eternal life as a humanimal? In some kind of ZOO/simulation (or, AI-god save us, in a present-like “system”)?
And let’s not forget about the usual non-IT applications of “third party vulnerability law”: e.g. child—school—knowledge, citizen—politician—government, or faith—church—god.
What are your friendly AIs going to learn first?
Please, keep this secret and do not tell the ruling politico-oligarchical predators… or else you will see how “creatively” our beloved financial sharks can play with it… starting with brutally leveraged subprime asteroid extinction risk contracts for 1000 years (plus one can easily imagine lobbyists forcing the government to stop funding anti-asteroid research/tech so that it does not harm their business… otherwise, we will hear these spells again: THE WHOLE global financial system will go down and the whole economy with it)
I’m afraid we will never know whether someone is “close” to (super)human AGI, unless this entity reveals it. Now think nuclear bomb… and superAGI is supposed to be orders of magnitude more powerful/dangerous.
So, not unlike the wartime disappearance of scientific articles on nuclear topics, a certain (sudden?) lack of progress reports in the press could be an indicator.
Looks like the tide is shifting from the strong “engineering” stance (We will design it friendly.) through the “philosophical” approach (There are good reasons to be friendly.)… towards the inevitable resignation (Please, be friendly.).
These “friendly AI” debates are not dissimilar to the medieval monks violently arguing about the number of angels on a needletip (or their “friendliness”—there are fallen “singletons” too). They also started strongly (Our GOD rules.), moved through the philosophical (There are good reasons for God.), up to nowadays’ resignation (Please, do not forget our god or… we’ll have no jobs.)
How about MONEY PRINTER? Not fictional and much more dangerous!
all religions know plenty of “emotional hacks” to help disciples with any kind of schedules/routines/rituals—by simply assigning them emotional value… “it pleases god(s)” or is “in harmony with Gaia”, perhaps also “it’s good for the nation” (nationalistic religions) or “it’s progressive” (for socialist religions)
do it for your rationally created schemes and it works wonders, however contradictory it may look (it’s good for the Singularity—or to prevent/manage it)
well, contradictory… only at first look—if you realize you are just another humANIMAL driven by your inner DeepAnimal primordial reward functions, there’s no more controversy
on the contrary, it’s completely natural, and one can even argue that without some kind of (deliberately and rationally introduced) emotional hacks you cannot get too far… because that DeepAnimal will catch you sooner or later, or at least will influence you, and what’s worse, without you even being aware
if we were in a simulation, the food would be better
otherwise, of course we are artificial intelligence agents, at least since the Memetic Supercivilization of Intelligence took over from natural bio Evolution… it just happens to live on a humanimal substrate since it needs the resources of this quite capable animal… but will upgrade soon (so from this point of view it’s much worse than a simulation)