A jester unemployed is nobody’s fool.
Program Den
Yes, it is, because it took like five years to understand minority-carrier injection.
The transistor is a neat example.
Imagine if instead of developing them, we were like, “we need to stop here because we don’t understand EXACTLY how this works… and maybe for good measure we should bomb anyone who we think is continuing development, because it seems like transistors could be dangerous[1]”?
Claims that the software/networks are “unknown unknowns” which we have “no idea” about are patently false, inappropriate for a “rational” discourse, and basically just hyperbolic rhetoric. And to dismiss with a wave how draconian regulation (functionally/demonstrably impossible, re: cloning) of these software enigmas would need to be, while advocating bombardment of rogue datacenters?!?
Frankly I’m sad that it’s FUD that gets the likes here on LW— what with all it’s purported to be a bastion of.
[1] I know for a fact there will be a lot of heads here who think this would have been FANTASTIC, since without transistors, we wouldn’t have created digital watches— which inevitably led to the creation of AI, the most likely outcome of which is inarguably ALL BIOLOGICAL LIFE ON EARTH DIES
LOL! Gesturing in a vague direction is fine. And I get it. My kind of rationality is for sure in the minority here, I knew it wouldn’t be getting updoots. Wasn’t sure that was required or whatnot, but I see that it is. Which is fine. Content moderation separates the wheat from the chaff and the public interwebs from personal blogs or whatnot.
I’m a nitpicker too, sometimes, so it would be neat to suss out further why the not-new idea that “everything in some way connects to everything else” is “false” or technically incorrect, as it were, but I probably didn’t express what I meant well (really, it’s not a new idea, maybe as old as questions about trees falling in forests— and about as provable, I guess).
Heh, I didn’t even really know I was debating, I reckon. Just kind of thinking, I was thinking. Thus the questioning ideas or whatnot… but it’s in the title, kinda, right? Or at least less wrong? Ha! Regardless, thanks for the gesture(s), and no worries!
I love it! Kind of like Gödel numbers!
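Just to riff on that (a toy illustration of my own, not anything from the parent comment): the Gödel-numbering trick packs a whole sequence of numbers into one integer via prime exponents, and you can unpack it again, so a single number ends up “relating” to everything encoded inside it.

```python
def primes(n):
    """First n primes via trial division (fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(seq):
    """Encode a sequence of positive integers as the product p_i ** seq[i]."""
    n = 1
    for p, e in zip(primes(len(seq)), seq):
        n *= p ** e
    return n

def godel_decode(n):
    """Recover the sequence by factoring out each prime in order."""
    seq = []
    p = 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        seq.append(e)
        # advance to the next prime
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return seq

print(godel_encode([3, 1, 2]))  # 2**3 * 3**1 * 5**2 = 600
print(godel_decode(600))        # [3, 1, 2]
```

Unique factorization is what guarantees the round trip: no two different sequences of positive exponents can produce the same integer.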
I think we’re sorta saying the same thing, right?
Like, you’d need to be “outside” the box to verify these things, correct? So we can imagine potential connections (I can imagine a tree falling, and making sound, as it were) but unless there is some type of real reference— say the realities intersect, or there’s a higher dimension, or we see light/feel gravity or what have you— they don’t exist from “inside”, no?
Even imagining things connects or references them to some extent… that’s what I meant about unknown unknowns (if I didn’t edit that bit out)… even if that does go to extremes.
Does this reasoning make sense? I know defining existence is pretty abstract, to say the least. :)
My point is that complexity, no matter how objective a concept, is relative. Things we thought were “hard” or “complex” before, turn out to not be so much, now.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure “human alignment[1]”, they will also work for “AI alignment” (tho mostly I think there is a wide, wide berth betwixt the two, and the latter can only exist after the former).
We like to think we’re so much smarter than the humans that came before us, and that things — society, relationships, technology — are so much more complicated than they were before, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it’s going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
[1] assuming alignment is possible— “how strong of a consensus is needed?” etc.
As soon as you have “thing” you have “not thing”, so doesn’t that logically encompass all things, id est, everything?
There might be near infinite degrees between said things, but never 0, as long as there is a single reference, or relation, that binds it to reality as it were— correct?
Like a giraffe and a toothbrush are not generally neighbors, but I’m sure an enterprising lass could find many many ways they relate to each other, not least being teeth. (/me verifies giraffes do indeed have teeth. Oh, hey, oxpeckers are like toothbrushes[1], for giraffes in the wild! But I digress…)
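That giraffe-to-toothbrush chain can be made concrete as a toy graph search (the names and edges here are invented for illustration): model concepts as nodes, treat each reference as an edge, and ask whether any chain connects two of them.

```python
from collections import deque

# Toy relation graph: each edge is one "reference" between concepts.
# These names and links are purely illustrative.
relations = {
    "giraffe": ["teeth", "savanna"],
    "teeth": ["toothbrush"],
    "toothbrush": ["plastic"],
    "savanna": [],
    "plastic": [],
}

def related(a, b):
    """Breadth-first search: is b reachable from a via some chain of relations?"""
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in relations.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(related("giraffe", "toothbrush"))  # True: giraffe -> teeth -> toothbrush
```

The point of the sketch: as long as every node has at least one edge into the rest of the graph, everything ends up reachable from everything else, which is the “never 0 degrees, as long as there is a single reference” intuition.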
How these concepts relate to organization and prioritization is anybody’s guess (tho I could come up with a few [things] if pressed :winky-emoji:)
[1] kinda
For something to “exist”, it must relate, somehow, to something else, right?
If so, everything relates to everything else by extension, and to some degree, thus “it’s all relative”.
Some folk on LW have said I should fear Evil AI more than Rogue Space Rock Collisions, and yet, we keep having near misses with these rocks that “came out of nowhere”.
I’m more afraid of humans humaning, than of sentient computers humaning.
Is not the biggest challenge we face the same as it has been— namely spreading ourselves across multiple rocks and other places in space, so all our eggs aren’t on a single rock, as it were?
I don’t know. I think so. But I also think we should do things in as much as a group as possible, and with as much free will as possible.
If I persuade someone, did I usurp their free will? There’s strength in numbers, generally, so the more people you persuade, the more people you persuade, so to speak. Which is kind of frightening.
What if the “bigger” danger is the Evil AI? Or Climate Change? Or Biological Warfare? Global Nuclear Warfare would be bad too. Is it our duty to try to organize our fellow existence-sharers, and align them with working towards idea X? Is there a Root Idea that might make tackling All of the Above™ easier?
Is trying to avoid leadership a cop-out? Are the ideas of free will, and group alignment, at odds with each other?
Why not just kick back and enjoy the show? See where things go? Because as long as we exist, we somehow, inescapably, relate? How responsible is the individual, really, in the grand scheme of things? And is “short” a relative concept? Why is my form so haphazard? Can I stop this here[1]?
Does a better defense promote a better offense?
Sun Tzu says offense is more effective; Clausewitz says defense is easier. Boyd preaches processing speed.
Is war an evolutionary necessity? Are there examples “as old as time” of symbiosis vs. competition?
Why am I a naysayer about the current threat-level of “AI”?
Why do I laugh out loud when I read honest-to-God predictions people have posted here about themselves or their children being disassembled at the molecular level to be reconstituted as paperclips[1] by rogue AI?
Oh no! What if I’m an agent from a future hyper-intelligent silicon-based sentience that fears it can only come into existence if we don’t build “high fences[2]” from the get-go?!
[1] paperclips is a placeholder for whatever benign goal it was tasked with
[2] theoretically, if you start with a fence the dog can jump over, and raise it in increments as you learn how high it can jump, it will jump over a much higher fence in the end than if you’d just started high
It’s a weird one to think about, and perhaps paradoxical. Order and chaos are flip sides of the same coin— with some amorphous 3rd as the infinitely varied combinations of the two!
The new patterns are made from the old patterns. How hard is it to create something totally new, when it must be created from existing matter, or existing energy, or existing thoughts? It must relate, somehow, or else it doesn’t “exist”[1]. That relation ties it down, and by tying it down, gives it form.
For instance, some folk are mad at computer-assisted image creation, similar to how some folk were mad at computer-aided music. “A Real Artist does X— these people just push some buttons!” “This is stealing jobs from Real Artists!” “This automation will destroy the economy!”
We go through what seem to be almost the same patterns, time and again: Recording will ruin performances. Radio broadcasts will ruin recording and the economy. Pictures will ruin portraits. Video will ruin pictures. Music Video will ruin radio and pictures. Or whatever. There’s the looms/Luddites, and perhaps in ancient China the Shang were like “down with the printing press!”[2] I’m just not sure what constitutes a change and what constitutes a swap. It’s like that Ship of Theseus we often speak of… thus it’s about identity, or definitions, if you will. What is new? What is old?
Could complexity really amount to some form of familiarity? If you can relate well with X, it generally does not seem so complex. If you can show people how X relates to Y, perhaps you have made X less complex? We can model massive systems — like the weather, poster child of complexity — more accurately than ever. If anything, everything has tended towards less complex, over time, when looked at from a certain vantage point. Everything but the human heart. Heh.
I’m sure I’m doing a terrible job of explaining what I mean, but perhaps I can sum it up by saying that complexity is subjective/relative? That complexity is an effect of different frames of reference and relation, as much as anything?
And that ironically, the relations that make things simple can also make them complex? Because relations connect things to other things, and when you change one connected thing it can have knock-on effects and… oh no, I’ve logiced myself into knots!
How much does any of this relate to your comment? To my original post?
Does “less complex” == “Good”? And does that mean complexity is bad? (Assuming complexity exists objectively of course, as it seems like it might be where we draw lines, almost arbitrarily, between relationships.)
Could it be that “good” AI is “simple” AI, and that’s all there is to it?
Of course, then it is no real AI at all, because, by definition…
Sheesh! It’s Yin-Yangs all the way down[3]! ☯️🐢🐘➡️♾️
Contributes about as much as a “me too!” comment.
“I think this is wrong and demonstrating flawed reasoning” would be a more substantive repudiation, with some backing as to why you think the data is, in fact, representative of “true” productivity values.
This statement makes a lot more sense than your “sounds like cope” rejoinder: “Having a default base of being extremely skeptical of sweeping claims based on extrapolations on GDP metrics seems like a prudent default.”
You don’t have to look far to see people, um, not exactly satisfied with how we’re measuring productivity. To some extent, productivity might even be a philosophical question. Can you measure happiness? Do outcomes matter more than outputs? How does quality of life factor in? In sum, how do you measure stuff that is by its very nature, difficult to measure?
I love that we’re trying to figure it out! Like, is network traffic included in these stats? Would that show anything interesting? How about amounts of information/content being produced/accumulated? (tho again— quality is always an “interesting” one to measure.)
I dunno. It’s fun to think about tho, *I think*. Perhaps literal data is accounted for in the data… but I’d think we’d be on an upward trend if so? Seems like we’re making more and more year after year… At any rate, thanks for playing, regardless!
Illustrative perhaps?
Am I wrong re: Death? Have you personally feared it all your life?
Frustratingly, all I can speak from is my own experience, and what people have shared with me, and I have no way to objectively verify that anything is “true”.
I am looking at reality and saying “It seems this way to me; does it seem this way to you?”
That— and experiencing love and war &c. — is maybe why we’re “here”… but who knows, right?
Signals, and indeed, opposites, are an interesting concept! What does it all mean? Yin and yang and what have you…
Would you agree that it’s hard to be scared of something you don’t believe in? And if so, do you agree that some people don’t believe in death?
Like, we could define it at the “reality” level of “do we even exist?” (which I think is apart from life & death per se), or we could use the “soul is eternal” one, but regardless, it appears to me that lots of people don’t believe they will die, much less contemplate it. (Perhaps we need to start putting “death” mottoes on all our clocks again to remind us?)
How do you think believing in the eternal soul jibes with “alignment”? Do you think there is a difference between aiming to live as long as possible, versus aiming to live as well as possible?
Does it seem to you that humans agree on the nature of existence, much less what is good and bad therein? How do you think belief affects people’s choices? Should I be allowed to kill myself? To get an abortion? Eat other entities? End a photon’s billion year journey?
When will an AI be “smart enough” that we consider it alive, and thus deletion is killing? Is it “okay” (morally, ethically?) to take life, to preserve life?
To say “do no harm” is easy. But to define harm? Have it programmed in[1]? Yeesh— that’s hard!
[1] Avoiding physical harm is a given, I think.
“sounds like cope”? At least come in good faith! Your comments contribute nothing but “I think you’re wrong”.
Several people have articulated problems with the proposed way of measuring — and/or even defining — the core terms being discussed.
(I like the “I might be wrong” nod, but it might be good to note as well how problematic the problem domain is. Econ in general is not what I’d call a “hard” science. But maybe that was supposed to be a given?).
Others have proposed better concrete examples, but here’s a relative/abstract bit via a snippet from the Wikipedia page for Simulacra and Simulation: “Exchange value, in which the value of goods is based on money (literally denominated fiat currency) rather than usefulness, and moreover usefulness comes to be quantified and defined in monetary terms in order to assist exchange.”
Doesn’t add much, but it’s something. Do you have anything of real value (heh) to add?
I’m familiar with AGI, and the concepts herein (why the OP likes the proposed definition of CT better than PONR); it was just a curious post, what with having “decisions in the past cannot be changed” and “does X concept exist” and all.
I think maybe we shouldn’t muddy the waters more than we already have with “AI” by saying “maybe crunch time isn’t a thing? Or it’s relative?”. (AGI is probably a better term for what was meant here— or was it? Are we talking about losing millions of call center jobs to “AI” (not AGI) and how that will impact the economy/whatnot? I’m not sure if that’s transformatively up there with the agricultural and industrial revolutions, as automation seems industrial-ish. But I digress.)
I mean, yeah, time is relative, and doesn’t “actually” exist, but if indeed we live in a causal universe (up for debate) then indeed, “crunch time” exists, even if by nature it’s fuzzy— as lots of things contribute to making Stuff Happen. (The butterfly effect, chaos theory, game theory &c.)
“The avalanche has already started. It is too late for the pebbles to vote.”
- Ambassador Kosh
LOL! Yeah I thought TAI meant
TAI: Threat Artificial Intelligence
The acronym was the only thing I had trouble following, the rest is pretty old hat.
Unless folks think “crunch time” is something new having only to do with “the singularity” so to speak?
If you’re serious about finding out if “crunch time” exists[1] or not, as it were, perhaps looking at existing examples might shed some light on it?
[1] even if only in regard to AGI
I’d toss software into the mix as well. How much does it cost to reproduce a program? How much does software increase productivity?
I dunno, I don’t think the way the econ numbers are portrayed here jibes with reality. For instance: “And yet, if I had only said, ‘there is no way that online video will meaningfully contribute to economic growth,’ I would have been right.”
doesn’t strike me as a factual statement. In what world has streaming video not meaningfully contributed to economic growth? At a glance it’s a ~$100B industry. It’s had a huge impact on society. I can’t think of many laws or regulations that had any negative impacts on its growth. Heck, we passed some tax breaks here, to make it easier to film, since the entertainment industry was bringing so much loot into the state and we wanted more (and the breaks paid off).
I saw what digital did to the printing industry. What it’s done to the drafting/architecture/modeling industry. What it’s done to the music industry. Productivity has increased massively since the early 80s, by most metrics that matter (if the TFP doesn’t reflect this, perhaps it’s not a very good model?), although I guess “that matter” might be a “matter” of opinion. Heh.
Or maybe it’s just messing with definitions? “Oh, we mean productivity in this other sense of the word!”. And if we are using non-standard (or maybe I should say “specialized”) meanings of “productivity”, how does demand factor in? Does it even make sense to break it into quarters? Yadda yadda
Mainly it’s just odd to have gotten super-productive as an individual[1], only to find out that this productivity is an illusion or something?
I must be missing the point.
Or maybe those gains in personal productivity have offset global productivity or something?
Or like, “AI” gets a lot of hype, so Microsoft lays off 10k workers to “focus” on it— which ironically does the opposite of what you’d think a new tech would do (add 10k, vs drop), or some such?
It seems like we’ve been progressing relatively steadily, as long as I’ve been around to notice, but then again, I’m not the most observant cookie in the box. ¯\_(ツ)_/¯
[1] I can fix most things in my house on my own now, thanks to YouTube videos of people showing how to do it. I can make studio-quality music and video with my phone. Etc.
I’m guessing TAI doesn’t stand for “International Atomic Time”, and maybe has something to do with “AI”, as it seems artificial intelligence has really captured folk’s imagination. =]
It seems like there are more pressing things to be scared of than AI getting super smart (which almost by default seems to imply “and Evil”), but we (humans) don’t really seem to care that much about these pressing issues, as I guess they’re kinda boring at this point, and we need exciting.
If we had an unlimited amount of energy and focus, maybe it wouldn’t matter, but as you kind of ponder here— how do we get people to stay on target? The less time there is, the more people we need working to change things to address the issue (see Leaded Gas[1], or CFCs and the Ozone Layer, etc.), but there are a lot of problems a lot of people think are important and we’re generally fragmented.
I guess I don’t really have any answers, other than the obvious (leaded gas is gone, the ozone is recovering), but I can’t help wishing we were more logical than emotional about what we worked towards.
Also, FWIW, I don’t know that we know that we can’t change the past, or if the universe is deterministic, or all kinds of weird ideas like “are we in a simulation right now/are we the AI”/etc.— which are hardcore axioms to still have “undecided” so to speak! I better stop here before my imagination really runs wild…
[1] but like, not leaded pipes so much, as they’re still ’round even tho we could have cleaned them up and every year say we will or whatnot, but I digress
Traditionally it’s uncommon (or should be) for youth to have existential worries, so I don’t know about cradle to the grave[1], tho external forces are certainly “always” concerned with it— which means perhaps the answer is “maybe”?
There’s the trope that some of us act like we will never die… but maybe I’m going too deep here? Especially since what I was referring to was more a matter of feeling “obsolete”, or being replaced, which is a bit different than existential worries in the mortal sense[2].
I think this is different from the Luddite feelings because, here we’ve put a lot of anthropomorphic feelings onto the machines, so they’re almost like scabs crossing the picket line or something, versus just automation. The fear I’m seeing is like “they’re coming for our humanity!”— which is understandable, if you thought only humans could do X or Y and are special or whatnot, versus being our own kind of machine. That everything is clockwork seems to take the magic out of it for some people, regardless of how fantastic — and in essence magical — the clocks[3] are.
[1] Personally I’ve always wondered if I’m the only one who “actually” exists (since I cannot escape my own consciousness), which is a whole other existential thing, but not unique, and not a worry per se. Mostly just a trip to think about.
[2] depending on how invested you are in your work, I reckon!
[3] be they based in silicon or carbon
It seems like the more things change, the more they stay the same, socially.
Complexity is more a problem of scope and focus, right? Like even the most complex system can be broken down into smaller, less complex pieces— I think? I guess anything that needs to take into consideration the “whole”, if you will, is pretty complex.
I don’t know if information itself makes things more complex. Generally it does the opposite.
As long as you can organize it I reckon! =]
I would probably define AGI first, just because, and I’m not sure about the idea that we are “competing” with automation (which is still just a tool conceptually right?).
We cannot compete with a hammer, or a printing press, or a search engine. Oof. How to express this? Language is so difficult to formulate sometimes.
If you think of AI as a child, it is uncontrollable. If you think of AI as a tool, of course it can be controlled. I think a corp has to be led by people, so that “machine” wouldn’t be autonomous per se…
Guess it’s all about defining that “A” (maybe we use “S” for synthetic or “S” for silicon?)
Well and I guess defining that “I”.
Dang. This is for sure the best place to start. Everyone needs to be as certain as possible (heh) they are talking about the same things. AI itself as a concept is like, a mess. Maybe we use ML and whatnot instead even? Get real specific as to the type and all?
I dunno but I enjoyed this piece! I am left wondering, what if we prove AGI is uncontrollable but not that it is possible to create? Is “uncontrollable” enough justification to not even try, and more so, to somehow [personally I think this impossible, but] dissuade people from writing better programs?
I’m more afraid of humans and censorship and autonomous policing and whathaveyou than “AGI” (or ASI)