I think some form of:
The elites, liberals, 1%, venture capitalists, or other hated out-group are pretending to act in your interest while only trying to get more power for themselves.
is probably a viable meme, though slightly problematic: fractured social groups would each need to assign “control of AI progress” to different, potentially contradictory hated out-groups.
I’m also a big fan of the metaphor:
Big AI companies are gambling with our future. We’ve already won a lot of chips by betting on AI investments and if we cashed out now everyone could benefit, but they keep betting on AGI/ASI instead of using our existing AI technology to benefit people.
I think leaving out what “winning” and “losing” mean in the metaphor is good: it avoids devolving into arguments about which futures are and are not possible, and (hopefully) attention can instead be drawn to the need to shift investment away from “the AGI bubble” and towards paying actual humans to use existing AI technology to solve the actual problems we face now. Ideally this would include a distinction between LLMs and applied data science with ML, but that’s probably not relevant for general audiences, though it may matter to people who are upset about genAI taking people’s jobs.
On a more meta note, I want more memes that help people get more comfortable dealing with uncertainty and probability… especially if we could promote jargon that helps people treat their worldviews as objects they work with rather than realities they live in… because the reality they live in is, unfortunately, beyond their ability to perfectly know.
I think that would help alleviate the selection pressure for a resolution to the tension. If people could feel ok being uncertain about AI, then instead of wanting to work for or against AI they could work against uncertainty itself: not by filling the gap with the first available answer, but by actually studying and understanding things.
Does anyone have candidates for jargon that could work for that? I think it needs to be a lot simpler and easier to work with than actual use of probability theory jargon.
I have no candidates, but I agree with this wholeheartedly. I think my ideal future is precisely to pause AI development at the level we have now and to appreciate all of the incredible good that the current generation of tools can already accomplish. Even compared to just half a decade ago (2020), where we are now with coding agents, image generation, and classification feels like incredible sci-fi.
I think this would be a really important thing to work on: trying to find potent memes that can be used to spread this idea of cashing out now before we inevitably lose our gains to the house.