Annoying, highly galaxy-brained consideration, but pretty plausibly, considerations about how big the cosmic endowment is are dwarfed by considerations about how big our logical endowment is. E.g., I would probably prefer to be in a much smaller but much simpler universe than in a bigger but more complicated one, for acausal trade reasons.
Ronny Fernandez
Lesswrong Liberated
Curated. This is some of my favorite LLM sci-fi I have ever read. The mystery is extremely captivating. Took me several reads to really get what was going on, and talking to models about it was pretty rewarding. I think it will read to some as LLM generated, but I am fairly confident that no LLM was involved in the writing of this piece. I have rarely felt so bad for a character as I felt for this fellow, and I think the situation they find themselves in is an interesting puzzle of the kind that makes ratfic great.
LessOnline ticket sales are live! (Earlybird pricing until April 7)
Curated. Conceptually building on The Rise of Parasitic AI seems worth doing. It’s a potentially important phenomenon that may end up playing a big part in how the coming century plays out. It’s reminiscent of this section of Christiano’s “What Failure Looks Like”. Exploring the extent to which we can bring an existing and mature discipline’s concepts and models to bear on the phenomenon is a great approach.
I appreciate your cashing out that process in terms of what predictions we should make if the approach makes sense. I think it is unlikely that this particular approach ends up being very fruitful, but only because every conceptual approach to a new kind of problem is unlikely to end up being very fruitful.
I hope you continue trying to find plausible ways to bring the concepts and models of successful, mature disciplines to bear on the sorts of problems we tend to care about around here.
Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I’ve not returned it to your drafts, because that would make the comments hard to access.
David, please don’t post more direct LLM output, or we’ll remove your posting permissions.
Sorry, I did not realize you were joking.
I think you should get better at distinguishing assertions from other kinds of speech acts that people make using the indicative mood.
Fwiw, I have not yet been at all convinced that I made any mistake in curating this post or in the content of the associated curation notice. It sure seems like there are a lot of people who feel quite strongly, however, and I would be interested in hearing more arguments.
This is possibly my favorite LLM sci-fi I have ever read. Extremely engaging.
Curated. Excited to see you there next year!
I appreciated this case study in a conference that apparently survives without any effort on anyone’s part to make it feel real. It is real, insofar as any of us knows, purely because we all agree that it is real (in the sense of “real” that means “something worthwhile happens here,” not “exists somewhere in spacetime”). I learned something here about the extent to which Schelling points can be surprisingly arbitrary, self-sustaining coordination feedback loops, and I will be on the lookout for other examples. I would have liked if the author had coined a term for such Schelling points. Another interesting feature of this example is that it seems few people noticed that this sort of feedback loop was the only thing that made the conference seem real. I suspect that a surprisingly large number of widely adopted human practices have much of this character to them.
Edited to add in order to appease my fellow mods: Btw, there is not an enormous organism that lives under California.
Curated. I have recently significantly increased my probability that I will be living alone for the foreseeable future, and so I found this post personally timely and inspiring. I would love to see more explorations of abandoned tech timelines in future posts. I appreciate that without this post, the internet may have only ever seen Marie T. Smith as the lady on the cover of the saddest cooking book ever. It is right and good that there should also be some public treatment of her heroic and groundbreaking experiments in the culinary arts.
What would a class aimed at someone like me (read lesswrong for many years, familiar with the basics of LLM architecture and learning to some extent) have to cover to get me up to speed on AI futurism by your lights? I am imagining the output here being like a bulleted list of 12-30 broad thingies.
Curated. This feels like an obvious idea (at least in retrospect), and I haven’t seen anyone else discuss it. The fact that you ran experiments and got interesting results puts this above my bar for curation.
I also appreciated the replies comparing it to ELK and debate paradigms. I’d love to see more discussion in the comments about how it relates to ELK.
I’m not very optimistic about this scaling to smarter models in domains where solutions are harder to verify, but I’m not confident in that take, and I hope I’m wrong. Either way, it likely helps with current models in easier-to-verify domains, and it seems like the implementation is close to ready, which is pretty cool.
Yeah, this always bothered me. And worse, “expected value” isn’t about “value” as in what matters terminally; it’s about “value” as in quantity.
Let me know if you wanna go to a sports bar and interact with some common folk some time.
wow
Ahhh, I see there was already a Wei Dai post about this referenced in the comments.