You see either something special, or nothing special.
Rana Dexsin
![[Pasted image 20240404223131.png]]
Oops? Would love to see the actual image here.
Not LLMs yet, but McDonalds is rolling out automated order kiosks,
That Delish article is from 2018! (And tangentially, I’ve been using those as my preferred way to order things at McDonald’s for a long while now, mostly because I find digital input far crisper than human contact through voice in a noisy environment.)
The subsequent “Ingrid Jacques” link goes to a separate tweet that links to the Delish article, but it’s not the Ingrid Jacques tweet, which itself is from 2024. I think the “tyson brody” tweet it links to instead might be a reply to the Ingrid Jacques one, but if so, that’s hidden to me, probably because I’m not logged in.
My (optimistic?) expectation is that it ends up (long run) a bit like baking.
Home/local music performance also transitioned into more of a niche already with the rise of recorded music as a default compared to live performance being the only option, didn’t it?
At that scale it’s more “celebrity-following”, but that is also something the AI would not have—I don’t know how big a deal that is.
While I doubt it will be the same thing for a transformer-era generative model due to the balance of workflow and results (and the resultant social linkages) being so different, it seems worth pointing out as a nearby reality anchor that virtual singers have had celebrity followings for a while, with Hatsune Miku and the other Crypton Future Media characters being the most popular where I tread. In fact, I fuzzily remember a song said to be about a Vocaloid producer feeling like their own name was being neglected by listeners in favor of the virtual singer’s (along the lines of mentally categorizing the songs as “Miku songs” rather than “(producer) songs”), but I can’t seem to source the interpretation now to verify my memory; it might’ve been “Unknown Mother-Goose”.
I don’t see it as an elephant overall, but I can see how you could push it to be the head of one: the head is facing to the right, the rightmost curve outlines the trunk, the upper left part of the main ‘object’ is an ear, and some of the vertical white shapes in the bottom left quadrant can be interpreted as tusks.
While I appreciate the attempt to bring in additional viewpoints, the “Sign-in Required” is currently an obstacle.
I claim that very few people actually understand what they are using and what effects it has on their minds.
How would you compare your generative-AI focus to the “toddlers being given iPads” transition, which seems to have already happened?
This SMBC from a few years ago including an “entropic libertarian” probably isn’t pointing at what people call “e/acc”… right? My immediate impression is that it rhymes though. I’m not sure how to feel about that.
The first sentence here is very confusing and I think inverts a comparison—I think you mean “would make the world enough worse off”.
The first somewhat contrary thing that comes to mind here is whether visible spending that looks like a status grab or is class-dissonant would also impact your social capital in terms of being able to source (loaned or gifted) money from your networks in case of a crunch or shock. If your friends will feel “well I sure would’ve liked to have X, but I was the ‘responsible’ one and you weren’t, so now I’m not going to put money in when you’re down” and that’s what you rely on as a safety net, then maybe you do need to pay attention to that kind of self-policing. If you’re reliant on less personal sources of credit, insurance, etc. or if your financially relevant social groups are themselves receptive to your ideas on not caring as much about class policing, then the self-policing can be mainly self-sabotage like you say.
Facepalm at self. You’re right, of course. I think I confused myself about the overall context after reading the end-note link there and went off at an angle.
Now to leave the comment up for history and in case it contains some useful parts still, while simultaneously thanking the site designers for letting me un-upvote myself. 😛
(Epistemic status: mostly observation through heavy fog of war, partly speculation)
From your previous comment:
The “educated savvy left-leaning online person” consensus (as far as I can gather) is something like: “AI art is bad, the real danger is capitalism, and the extinction danger is some kind of fake regulatory-capture hype techbro thing which (if we even bother to look at the LW/EA spaces at all) is adjacent to racists and cryptobros”.
So clearly you’re aware of / agree with this being a substantial chunk of what’s happening in the “mass social media” space, in which case…
Given this, plus anchoring bias, you should expect and be very paranoid about the “first thing people hear = sets the conversation” thing.
… why is this not just “お前はもう死んでいる” (“you are already dead”—that is, you are already cut off from this strategy due to things that happened before you could react) right out of the gate, at least for that (vocal, seemingly influential) subpopulation?
What I observe in many of my less-technical circles (which roughly match the above description) is that as soon as the first word exits your virtual mouth that implies that there’s any substance to any underlying technology itself, good or bad or worth giving any thought to at all (and that’s what gets you on the metatextual level, the frame-clinging plus some other stuff I want to gesture at but am not sure whether that’s safe to do right now), beyond “mass stealing to create a class divide”, you instantly lose. At best everything you say gets interpreted as “so the flood of theft and soulless shit is going to get even worse” (and they do seem to be effectively running on a souls-based model of anticipation even if their overt dialogue isn’t theistic, which is part of what creates a big inferential divide to start with). But you don’t seem to be suggesting leaning into that spin, so I can’t square what you’re suggesting with what seem to be shared observations. Also, the less loud and angry people are still strongly focused on “AI being given responsibility it’s not ready for”, so as soon as you hint at exceeding human intelligence, you lose (and you don’t then get the chance to say “no no, I mean in the future”, you lose before any further words are processed).
Now, I do separately observe a subset of more normie-feeling/working-class people who don’t loudly profess the above lines and are willing to e.g. openly use some generative-model art here and there in a way that suggests they don’t have the same loud emotions about the current AI-technology explosion. I’m not as sure what main challenges we would run into with that crowd, and maybe that’s whom you mean to target. I still think getting taken seriously would be tricky, but they might laugh at you more mirthfully instead of more derisively, and low-key repetition might have an effect. I do kind of worry that even if you start succeeding there, then the x-risk argument can get conflated with the easier-to-spread “art theft”, “laundering bias”, etc. models (either accidentally, or deliberately by adversaries) and then this second crowd maybe gets partly converted to that, partly starts rejecting you for looking too similar to that, and partly gets driven underground by other people protesting their benefiting from the current-day mundane-utility aspect.
I also observe a subset of business-oriented people who want the mundane utility a lot but often especially want to be on the hype train for capital-access or marketing reasons, or at least want to keep their friends and business associates who want that. I think they’re kind of constrained in what they can openly say or do and might be receptive to strategic thinking about x-risk but ultimately dead ends for acting on it—but maybe that last part can be changed with strategic shadow consensus building, which is less like mass communication and where you might have more leeway and initial trust to work with. Obviously, if someone is already doing that, we don’t necessarily see it posted on LW. There are probably some useful inferences to be drawn from events like the OpenAI board shakeup here, but I don’t know what they are right now.
FWIW, I have an underlying intuition here that’s something like “if you’re going to go Dark Arts, then go big or go home”, but I don’t really know how to operationalize that in detail and am generally confused and sad. In general, I think people who have things like “logical connectives are relevant to the content of the text” threaded through enough of their mindset tend to fall into a trap analogous to the “Average Familiarity” xkcd or to Hofstadter’s Law when they try truly-mass communication unless they’re willing to wrench things around in what are often very painful ways to them, and (per the analogies) that this happens even when they’re specifically trying to correct for it.
You’re right; I’d forgotten about the indicator. That makes sense and that is interesting then, huh.
and I’m faintly surprised it knows so much about it
GPT-4 via the API, or via ChatGPT Plus? Didn’t they recently introduce browsing to the latter so that it can fetch Web sources about otherwise unknown topics?
The porno latent space has been explored so thoroughly by human creators that adding AI to the mix doesn’t change much.
Something about this feels off to me. One of the salient possibilities in terms of technology affecting romantic relationships, I think, is hyperspecificity in preferences, which seems like it has a substantial social component to how it evolves. In the case of porn, with (broadly) human artists, the r34 space still takes a substantial delay and cost to translate a hyperspecific impulse into hyperspecific porn, including the cost of either having the skills and taking on the workload mentally (if the impulse-haver is also the artist) or exposing something unusual plus mundane coordination costs plus often commission costs or something (if the impulse-haver is asking a different artist).
With interactively usable, low-latency generative AI, an impulse-haver could not only do a single translation step like that much more easily, but iterate on a preference and essentially drill themselves a tunnel out of compatibility range. No? That seems like the kind of thing that makes an order-of-magnitude difference. Or do natural conformity urges or starting distributions stop that from being a big deal? Or what?
Having written that, I now wonder what circumstances would cause people to drill tunnels toward each other using the same underlying technology, assuming the above model were true…
The public continued to react as they have to AI for the past year—confused, fearful, and weary.
Confirm word: “weary” or “wary”? Both are plausible here, but the latter gels better with the other two, so it’s hard to tell whether it’s a mistake.
That last reminds me of Gwern’s “The Melancholy of Subculture Society” with regard to creating a profusion of smaller status ladders to be on.
The Latin noun “instauratio” is feminine, so “magna” uses the feminine “-a” ending to agree with it. “forum” in Latin is neuter, so “magnum” would be the corresponding form of the adjective. (All assuming nominative case.)
I’m not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
I disagree with the last paragraph and think that “Normally” is misleading as stated in the OP; I think it’s clear when talking about numbers in a general sense that issues with representations of numbers as used in computers aren’t included except as a side curiosity or if there’s a cue to the effect that that’s what’s being discussed.
If (as I suspect is the case) one of the in-practice purposes or benefits of a limit is to make it harder for an escalation spiral to continue via comments written in a heated emotional state, delaying the reading destroys that effect compared to delaying the writing. If the limited user is in a calm state and believes it’s worth it to push back, they can save their own draft elsewhere and set their own timer.