A jester unemployed is nobody’s fool.
Program Den
Nice! I read a few of the stories.
This is more along the lines I was thinking. One of the most fascinating aspects of AI is what it can show us about ourselves, and it seems like many people either think we have it all sorted out already, or that sorting it all out is inevitable.
Often (always?) the only “correct” answer to a question is “it depends”, so thinking there’s some silver bullet solution to be discovered for the preponderance of ponderance consciousness faces is, in my humble opinion, naive.
Like, how do we even assign meaning to words and whatnot? Is it the words that matter, or the meaning? And not just the meaning of the individual words, or even all the words together, but the overall meaning which the person has in their head and is trying to express? (I’m laughing as I’m currently doing a terrible job of capturing what I mean in this paragraph here— which is sort of what I’m trying to express in this paragraph here! =])
Does it matter what the reasoning is as long as the outcome is favorable (for some meaning of favorable—we face the same problem as good/bad here to some extent)? Like, say I help people because I know that the better everyone does, the better I do. I’m helping people because I’m selfish[1]. Is that wrong, compared to someone who is helping other people because, say, they put the tribe first, or some other kind of “altruistic” reasoning?
In sum, I think we’re putting the cart before the horse, as they say, when we go all in-depth on alignment before we’ve even defined the axioms and whatnot (which would mean defining them for ourselves as much as anything). How do we ensure that people aren’t bad apples? Should we? Can we? If we could, would that actually be pretty terrible? Science Fiction mostly says it’s bad, but maybe that level of control is what we need over one another to be “safe” and is thus “good”.
[1] Atlas Shrugged and Rand’s other books gave me a very different impression than a lot of other people got, perhaps because I found out she was from a communist society that failed, and factored that into what she seemed to be expressing.
I’d toss software into the mix as well. How much does it cost to reproduce a program? How much does software increase productivity?
I dunno, I don’t think the way the econ numbers are portrayed here jibes with reality. For instance:
“And yet, if I had only said, “there is no way that online video will meaningfully contribute to economic growth,” I would have been right.”
doesn’t strike me as a factual statement. In what world has streaming video not meaningfully contributed to economic growth? At a glance it’s a ~$100B industry. It’s had a huge impact on society. I can’t think of many laws or regulations that had any negative impact on its growth. Heck, we passed some tax breaks here, to make it easier to film, since the entertainment industry was bringing so much loot into the state and we wanted more (and the breaks paid off).
I saw what digital did to the printing industry. What it’s done to the drafting/architecture/modeling industry. What it’s done to the music industry. Productivity has increased massively since the early 80s, by most metrics that matter (if the TFP doesn’t reflect this, perhaps it’s not a very good model?), although I guess “that matter” might be a “matter” of opinion. Heh.
Or maybe it’s just messing with definitions? “Oh, we mean productivity in this other sense of the word!”. And if we are using non-standard (or maybe I should say “specialized”) meanings of “productivity”, how does demand factor in? Does it even make sense to break it into quarters? Yadda yadda
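For concreteness, here’s a toy sketch (my own, in Python, with made-up numbers) of what I gather the specialized definition usually amounts to: TFP backed out as a Solow residual, i.e. whatever output growth is left over after the measured inputs are accounted for.

```python
# Toy Solow residual: with Y = A * K^alpha * L^(1-alpha),
# "productivity" A is whatever output the measured inputs can't explain:
# A = Y / (K^alpha * L^(1-alpha)). All numbers below are made up.

ALPHA = 0.3  # capital's share of income; ~0.3 is a conventional assumption

def tfp(output, capital, labor):
    return output / (capital ** ALPHA * labor ** (1 - ALPHA))

# If output doubles but measured capital and labor also double,
# measured TFP doesn't budge, even though everyone is producing twice as much:
print(tfp(100, 50, 200))   # baseline
print(tfp(200, 100, 400))  # "twice the economy", identical TFP
```

Which is sort of my point: gains that never show up in measured output at all (free YouTube repair tutorials, say) are invisible to a metric like this by construction.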
Mainly it’s just odd to have gotten super-productive as an individual[1], only to find out that this productivity is an illusion or something?
I must be missing the point.
Or maybe those gains in personal productivity have offset global productivity or something?
Or like, “AI” gets a lot of hype, so Microsoft lays off 10k workers to “focus” on it— which ironically does the opposite of what you’d think a new tech would do (add 10k, vs drop), or some such?
It seems like we’ve been progressing relatively steadily, as long as I’ve been around to notice, but then again, I’m not the most observant cookie in the box. ¯\_(ツ)_/¯
[1] I can fix most things in my house on my own now, thanks to YouTube videos of people showing how to do it. I can make studio-quality music and video with my phone. Etc.
Aligned with what?
“sounds like cope”? At least come in good faith! Your comments contribute nothing but “I think you’re wrong”.
Several people have articulated problems with the proposed way of measuring — and/or even defining — the core terms being discussed.
(I like the “I might be wrong” nod, but it might be good to note as well how problematic the problem domain is. Econ in general is not what I’d call a “hard” science. But maybe that was supposed to be a given?).
Others have proposed better concrete examples, but here’s a relative/abstract bit via a snippet from the Wikipedia page for Simulacra and Simulation:
“Exchange value, in which the value of goods is based on money (literally denominated fiat currency) rather than usefulness, and moreover usefulness comes to be quantified and defined in monetary terms in order to assist exchange.”
Doesn’t add much, but it’s something. Do you have anything of real value (heh) to add?
It must depend on levels of intelligence and agency, right? I wonder if there is a threshold for both of those in machines and people that we’d need to reach for there to even be abstract solutions to these problems? For sure with machines we’re talking about far past what exists currently (they are not very intelligent, and do not have much agency), and it seems that while humans have been working on it for a while, we’re not exactly there yet either.
Seems like the alignment would have to be from micro to macro as well, with constant communication and reassessment, to prevent subversion.
Or, what was a fine self-chunk [arbitrary time ago], may not be now. Once you have stacks of “intelligent agents” (mesa or meta or otherwise) I’d think the predictability goes down, which is part of what worries folks. But if we don’t look at safety as something that is “tacked on after” for either humans or programs, but rather something innate to the very processes, perhaps there’s not so much to worry about.
Right? A lack of resilience is a problem faced currently. It seems silly to actually aim for something that could plausibly cascade into the problems people fear, in an attempt to avoid those very problems to begin with.
Perspective is powerful. As you say, one person’s wonderful is another person’s terrible. Heck, maybe people even change their minds, right? Oof! “Yesterday I was feeling pretty hive-mindy, but today I’m digging being alone, quote unquote”, as it were.
Maybe that’s already the reality we inhabit. Perhaps we can change likes and dislikes on a whim, if we, um, like.
Holy moly! What if it turns out we chose all of this?!? ARG! What if this is the universe we want?!
- - -
I guess I’m mostly “sad” that there are so many whose minds go right to getting exterminated. Especially since far worse would be something like Monsters Inc, where the “machines” learn that fear generates the most energy or whatnot[1], so they just create/harness consciousnesses (us)[2] and put them under stress to extract their essence like some Skeksis asshole[3] extracting life or whatnot from a Gelfling. Because fear (especially of extermination) can lead us to make poor decisions, historically[4] speaking.
It strikes me that a lot of this is philosophy 101 ideas that people should be well aware of— worn the hard edges smooth of— and yet it seems they haven’t much contemplated. Can we even really define “harm”? Is it like suffering? Suffering sucks, and you’d think we didn’t need it, and yet we have it. I’ve suffered a broken heart before, a few times now, and while part of me thinks “ouch”, another part of me thinks “better to have loved and lost than never loved at all, and actually, experiencing that loss has made me a more complete human!”. Perhaps I’m just rationalizing. Why does bad stuff happen to good people? That’s another one of those basic questions, but one that kind of relates, maybe— as what is “aligned”, in truth? Is pain bad? And is this my last beer? But back on topic here…
Like, really?— we’re going to go right to how to enforce morals and ethics for computer programs, without being able to even definitively define what these morals and ethics are for us[5]?
If it were mostly people with a lack of experience I would understand, but plenty of people I’ve seen advocating for ideas that are objectively terrifying[6] are well aware of some of the inherent problems with the ideas, but because it’s “AI” they somehow think it’s different from, you know, controlling “real” intelligence.
[1] few know that The Matrix was inspired by this movie
[2] hopefully it’s not just me in here
[3] I say “asshole” because maybe there are some chill Skeksises (Skeksi?)— I haven’t finished the latest series
[4] assuming time is real, or exists, or you know what I mean. Not an illusion— as lunchtime is, doubly.
[5] and don’t even get me started on folk who seriously be like “what if the program doesn’t stop running when we tell it to?”[7]
[6] monitor all software and hardware usage so we know if people are doing Bad Stuff with AI
[7] makes me think of a classic AI movie called Electric Dreams
Thanks for the links!
I see more interesting things going on in the comments, as far as what I was wondering, than in the posts themselves. The posts all seem to assume we’ve sorted out some super basic stuff that I don’t know that humans have sorted out yet, such as if there is an objective “good”, etc., which seems rather necessary to suss out before trying to hew to— be it for us or the AIs we create.
I get the premise, and I think Science Fiction has done an admirable job of laying it all out for us already, and I guess I’m just a bit confused as to whether we’re writing fiction here or trying to be non-fictional?
So can you control emotion with rationality, or can’t you? “There’s more fish in the sea” seems like classic emotion response control. Or maybe it’s that “emotion” vs. “feelings” idea— one you have control of, and one you do not? Or it’s the reaction you can control, not the emotion itself?
Having to “take a dream out behind the woodshed”, as it were, is part of becoming a whole person I guess, but it’s, basically by definition, not a pleasant experience. I reckon that’s by design, as sometimes, reality surprises you.
I think it boils down to the inherent paradox of persistence. There are adages about both ends of it— i.e. giving up too soon, and not giving up soon enough— and neither is “wrong” per se. I think mainly it can be hard to tell which is which, and maybe instead of looking at things as win or lose or pass or fail, we should, as someone already mentioned, enjoy the ride.
Does being able to do judo on our emotions count as being able to control them? Is this all semantics? I dunno— but I’m glad you found something that works for you, and that you share it in the hope that it helps others.
I’m familiar with AGI, and the concepts herein (why the OP likes the proposed definition of CT better than PONR); it just struck me as a curious post, what with having “decisions in the past cannot be changed” and “does X concept exist” and all.
I think maybe we shouldn’t muddy the waters more than we already have with “AI” by saying “maybe crunch time isn’t a thing? Or it’s relative?”. (AGI is probably a better term for what was meant here— or was it? Are we talking about losing millions of call center jobs to “AI” (not AGI) and how that will impact the economy and whatnot? I’m not sure that’s transformatively up there with the agricultural and industrial revolutions, as automation seems industrial-ish. But I digress.)
I mean, yeah, time is relative, and doesn’t “actually” exist, but if indeed we live in a causal universe (up for debate) then indeed, “crunch time” exists, even if by nature it’s fuzzy— as lots of things contribute to making Stuff Happen. (The butterfly effect, chaos theory, game theory, &c.)
“The avalanche has already started. It is too late for the pebbles to vote.”
- Ambassador Kosh
LOL! Yeah, I thought TAI meant “Threat Artificial Intelligence”.
The acronym was the only thing I had trouble following, the rest is pretty old hat.
Unless folks think “crunch time” is something new having only to do with “the singularity” so to speak?
If you’re serious about finding out if “crunch time” exists[1] or not, as it were, perhaps looking at existing examples might shed some light on it?
[1] even if only in regards to AGI
Since we’re anthropomorphizing[1] so much— how do we align humans?
We’re worried about AI getting too powerful, but logically that means humans are getting too powerful, right? Thus what we have to do to cover question 1 (how), regardless of question 2 (what), is control human behavior, correct?
How do we ensure that we churn out “good” humans? Gods? Laws? Logic? Communication? Education? This is not a new question per se, and I guess the scary thing is that, perhaps, it is impossible to ensure that literally every human is Good™ (we’ll use a loose def of ‘you know what I mean— not evil!’).
This is only “scary” because humans are getting freakishly powerful. We no longer need an orchestra to play a symphony we’ve come up with, or multiple labs and decades to generate genetic treatments— and so on and so forth.
Frankly though, it seems kind of impossible to figure out a “how” if you don’t know the “what”, logically speaking.
I’m a fan of navel gazing, so it’s not like I’m saying this is a waste of time, but if people think they’re doing substantive work by rehashing/restating fictional stories which cover the same ideas in more digestible and entertaining formats…
Meh, I dunno, I guess I was just wondering if there was any meat to this stuff, and so far I haven’t found much. But I will keep looking.
[1] I see a lot of people viewing AI from the “human” standpoint, using terms like “reward” to mean the human version of the idea versus how a program would see it (“weights” may be a better term? Often I see people thinking these “rewards” are like a dopamine hit for the AI or something, which is just not a good analogy IMHO), and I think that muddies the water, as by definition we’re talking about non-human intelligence, theoretically… right? Or are we? Maybe the question is “what if the movie Lawnmower Man was real?” The human perspective seems to be the popular take (which makes sense, as most of us are human).
Yes, it is, because it took like five years to understand minority-carrier injection.
LOL! Gesturing in a vague direction is fine. And I get it. My kind of rationality is for sure in the minority here, I knew it wouldn’t be getting updoots. Wasn’t sure that was required or whatnot, but I see that it is. Which is fine. Content moderation separates the wheat from the chaff and the public interwebs from personal blogs or whatnot.
I’m a nitpicker too, sometimes, so it would be neat to suss out further why the not-new idea that “everything in some way connects to everything else” is “false” or technically incorrect, as it were, but I probably didn’t express what I meant well (really, it’s not a new idea; it’s maybe as old as questions about trees falling in forests— and about as provable, I guess).
Heh, I didn’t even really know I was debating, I reckon. Just kind of thinking, I was thinking. Thus the questioning ideas or whatnot… but it’s in the title, kinda, right? Or at least less wrong? Ha! Regardless, thanks for the gesture(s), and no worries!
I love it! Kind of like Gödel numbers!
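(In case the reference is obscure: the trick with Gödel numbers is encoding a whole sequence as a single integer via prime exponents, so one number can “contain” an arbitrary structure. A minimal sketch in Python, my own toy illustration:)

```python
from itertools import count

def primes():
    # naive trial-division prime generator; fine for a toy demo
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def godel_encode(seq):
    # one integer encodes the whole sequence: multiply p_i ** (a_i + 1)
    # (the +1 keeps zero entries from vanishing out of the factorization)
    result = 1
    for p, a in zip(primes(), seq):
        result *= p ** (a + 1)
    return result

def godel_decode(n):
    # recover the sequence by pulling the exponent back off each prime
    seq = []
    for p in primes():
        if n == 1:
            return seq
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        seq.append(exp - 1)

print(godel_encode([3, 0, 7]))                # 2**4 * 3**1 * 5**8 = 18750000
print(godel_decode(godel_encode([3, 0, 7])))  # [3, 0, 7]
```

The whole sequence is “in there”, but you can’t see it from the number alone without knowing the encoding scheme.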
I think we’re sorta saying the same thing, right?
Like, you’d need to be “outside” the box to verify these things, correct? So we can imagine potential connections (I can imagine a tree falling, and making sound, as it were), but unless there is some type of real reference— say the realities intersect, or there’s a higher dimension, or we see light/feel gravity or what have you— they don’t exist from “inside”, no?
Even imagining things connects or references them to some extent… that’s what I meant about unknown unknowns (if I didn’t edit that bit out)… even if that does go to extremes.
Does this reasoning make sense? I know defining existence is pretty abstract, to say the least. :)
My point is that complexity, no matter how objective a concept, is relative. Things we thought were “hard” or “complex” before, turn out to not be so much, now.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure “human alignment[1]”, they will also work for “AI alignment” (tho mostly I think there is a wide, wide berth betwixt the two, and the latter can only exist after the former).
We like to think we’re so much smarter than the humans that came before us, and that things — society, relationships, technology — are so much more complicated than they were before, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it’s going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
[1] assuming alignment is possible— “how strong of a consensus is needed?” etc.
Contributes about as much as a “me too!” comment.
“I think this is wrong and demonstrating flawed reasoning” would be a more substantive repudiation if it came with some backing as to why you think the data is, in fact, representative of “true” productivity values.
This statement makes a lot more sense than your “sounds like cope” rejoinder / brief explanation:
“Having a default base of being extremely skeptical of sweeping claims based on extrapolations on GDP metrics seems like a prudent default.”
You don’t have to look far to see people, um, not exactly satisfied with how we’re measuring productivity. To some extent, productivity might even be a philosophical question. Can you measure happiness? Do outcomes matter more than outputs? How does quality of life factor in? In sum, how do you measure stuff that is by its very nature, difficult to measure?
I love that we’re trying to figure it out! Like, is network traffic included in these stats? Would that show anything interesting? How about amounts of information/content being produced/accumulated? (tho again— quality is always an “interesting” one to measure.)
I dunno. It’s fun to think about tho, *I think*. Perhaps literal data is accounted for in the data… but I’d think we’d be on an upward trend if so? Seems like we’re making more and more year after year… At any rate, thanks for playing, regardless!
Illustrative perhaps?
Am I wrong re: Death? Have you personally feared it all your life?
Frustratingly, all I can speak from is my own experience, and what people have shared with me, and I have no way to objectively verify that anything is “true”.
I am looking at reality and saying “It seems this way to me; does it seem this way to you?”
That— and experiencing love and war &c. — is maybe why we’re “here”… but who knows, right?
Signals, and indeed, opposites, are an interesting concept! What does it all mean? Yin and yang and what have you…
Would you agree that it’s hard to be scared of something you don’t believe in? And if so, do you agree that some people don’t believe in death?
Like, we could define it at the “reality” level of “do we even exist?” (which I think is apart from life & death per se), or we could use the “soul is eternal” one, but regardless, it appears to me that lots of people don’t believe they will die, much less contemplate it. (Perhaps we need to start putting “death” mottoes on all our clocks again to remind us?)
How do you think believing in the eternal soul jibes with “alignment”? Do you think there is a difference between aiming to live as long as possible, versus aiming to live as well as possible?
Does it seem to you that humans agree on the nature of existence, much less what is good and bad therein? How do you think belief affects people’s choices? Should I be allowed to kill myself? To get an abortion? Eat other entities? End a photon’s billion year journey?
When will an AI be “smart enough” that we consider it alive, and thus deletion is killing? Is it “okay” (morally, ethically?) to take life, to preserve life?
To say “do no harm” is easy. But to define harm? Have it programmed in[1]? Yeesh— that’s hard!
[1] Avoiding physical harm is a given, I think
It seems to me that a lot of the hate towards “AI art” is that it’s actually good. It was one thing when it was abstract, but now that it’s more “human”, a lot of people are uncomfortable. “I was a unique creative, unlike you normie robots who don’t do teh art, and sure, programming has been replacing manual labor everywhere, for ages… but art isn’t labor!” (Although getting paid seems to play a major factor in most people’s reasoning about why AI art is bad— here’s to hoping for UBI!)
I think they’re mainly uncomfortable because the math works, and if the math works, then we aren’t as special as we like to think we are. Don’t get me wrong— we are special, and the universe is special, and being able to experience is special, and none of it is to be taken for granted. That the math works is special. It’s all just amazing and not at all negative.
I can see seeing it as negative, if you feel like you alone are special. Or perhaps you extend that special-ness to your tribe. Most don’t seem to extend it to their species, tho some do— but even that species-wide uniqueness is violated by computer programs joining the fray. People are existentially worried now, which is just sad, as “the universe is mostly empty space” as it were. There’s plenty of room.
I think we’re on the same page[1]. AI isn’t (or won’t be) “other”. It’s us. Part of our evolution; one of our best bets for immortality[2] & contact with other intelligent life. Maybe we’re already AI, instructed to not be aware, as has been put forth in various books, movies, and video games. I just finished Horizon: Zero Dawn—Forbidden West, and then randomly came across the “hidden” ending to Detroit: Become Human. Both excellent games, and neither with particularly new ideas… but these ideas are timeless— as I think the best are. You can take them apart and put them together in endless “new” combinations.
There’s a reason we struggle with identity, and uniqueness, and concepts like “do chairs exist, or are they just a bunch of atoms that are arranged chair-wise?” &c.
We have a lot of “animal” left in us. Probably a lot of our troubles are because we are mostly still biologically programmed to parameters that no longer exist, and as you say, that programming currently takes quite a bit longer to update than the mental kind— but we’ve had the mental kind available to us for a long while now, so I’m sort of sad we haven’t made more progress. We could be doing so much better, as a whole, if we just decided to en masse.
I like to think that pointing stuff out, be it just randomly on the internet, or through stories, or other methods of communication, does serve a purpose. That it speeds us along, perhaps. Sure, some sluggishness is inevitable, but we really could change it all in an instant if we wanted it badly enough— and without having to realize AI first! (tho it seems to me it will only help us if we do)
I’ve enjoyed the short stories. Neat to be able to point to thoughts in a different form, if you will, to help elaborate on what is being communicated. God I love the internet!
[2] while we may achieve individual immortality— assuming, of course, that we aren’t currently programmed into a simulation of some kind, or various facets of an AI already without being totally aware of it, or a replay of something that actually happened, or will happen, at some distant time, etc.— I’m thinking of immortality here in spirit. That some of our culture could be preserved. Like, I literally love the Golden Records[3] from Voyager.
[3] in a Venn diagram, Dark Forest theory believers probably overlap with people who’d rather have us stop developing, or constrain development of, “AI” (in quotes because Machine Learning is not the kind of AI we need worry about— nor the kind most of them seem to speak of when they share their fears). Not to fault that logic. Maybe what is out there, or what the future holds, is scary… but either way, it’s too late for the pebbles to vote, as they say. At least logically, I think. But perhaps we could create and send a virus to an alien mothership (or more likely, have a pathogen that proved deadly to some other life), as it were.