I write software for a living and sometimes write on Substack: https://taylorgordonlunt.substack.com/
Taylor G. Lunt
Halfhaven Digest #1
I’m not sure it’s a problem that necessarily needs solving right now, just like it isn’t in Omelas. Any attempt to save the child would probably cause the utopia to collapse and lead to worse outcomes overall. Unless you can come up with a clever solution that preserves the goodness but gets rid of the badness. This is the lab-grown meat approach to animal suffering.
You’re right, sometimes people can view their actions as necessary evils, and I also wonder why people can sometimes stomach it. Maybe when a necessary evil exists to prevent an even greater, easy-to-understand evil, it’s easier to process, as in the case of the atomic bombs.
10 Ways to Waste a Decade
To be clear, I am not one of the ones who walk away from Omelas. I think those people are naive and suicidal.
I am one of the ones who builds a nonliving effigy in my basement, finds a way to prove it works just as well as a real suffering small child, then releases my results publicly, at first anonymously.
I agree. Welcome to Omelas.
One Does Not Simply Walk Away from Omelas
Something appeals to me far more about the wobbly chair story than the dopamine addiction story. In the wobbly chair story, you spent 1 minute improving your life and didn’t have to think about it again. In the other story, it was a constant battle that required diligence for a while. You can only do so many of those kinds of things at once.
It’s still good advice. When things aren’t working, thinking them through and trying things out is a good move. I just wonder if people have any advice that’s more like the wobbly chair story: quick, cheap, semi-permanent wins that don’t require willpower.
I’ll join! I’m sick right now so my first posts will be slapped together, but maybe that’ll put me in the right mindset.
In my view, you don’t get novel insights without deep thinking except extremely rarely, by chance, but you’re right to make sure the topic doesn’t shift without anyone noticing.
I think it might be worthwhile to distinguish cases where LLMs came up with a novel insight on their own vs. were involved, but not solely responsible.
You wouldn’t credit Google for the breakthrough of a researcher who used Google when making a discovery, even if the discovery wouldn’t have happened without the Google searches. The discovery also might not have happened without the eggs and toast the researcher had for breakfast.
“LLMs supply ample shallow thinking and memory while the humans supply the deep thinking” is a different and currently much more believable claim than “LLMs can do deep thinking to come up with novel insights on their own.”
I can’t remember the quote, but I believe this possibility is mentioned offhand in IABIED, with the authors suggesting a superhuman but still weak AI might do what we can’t and craft, rather than grow, another AI, so that it can ensure the better successor AI is aligned to its goals.
Let’s say I weigh 250 pounds, but I show up to a boxing weigh-in with negative 100 pounds of helium balloons strapped to my back. I end up in the same weight class as 150-pound men, even though I can punch like a 250-pound man. Is that fair?
If divisions are essentially arbitrary, when is it better to go through the effort to change them, and when is it better to just say, “no, sir, you can’t weigh in with balloons on”?
I ended up doing an experiment similar to this here. Though I realized even shallow thinking is an advantage when playing board games (no matter how good your heuristics, you still have to calculate a few moves ahead using those heuristics), so I looked at the difference in performance between simple vs. complex games to try to get at deep thinking.
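To make “calculate a few moves ahead using those heuristics” concrete, here’s a minimal sketch of depth-limited minimax, where a heuristic evaluation stands in once the lookahead budget runs out. The `game` interface (`legal_moves`, `apply`, `is_over`, `heuristic`) is hypothetical, not code from my experiment:

```python
# Sketch only: `game` is a hypothetical interface, not code from the experiment.
def minimax(game, state, depth, maximizing):
    # Out of lookahead budget (or the game is over): fall back on the heuristic.
    if depth == 0 or game.is_over(state):
        return game.heuristic(state)
    # Otherwise recurse one move deeper, alternating players.
    values = [
        minimax(game, game.apply(state, move), depth - 1, not maximizing)
        for move in game.legal_moves(state)
    ]
    return max(values) if maximizing else min(values)
```

Even with a perfect heuristic at the leaves, picking a move still means running this shallow search, which is the sense in which shallow thinking helps on any game.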
Not all models were equally terrible, but all models were closer to equally terrible on complex games than on simple games.
I appreciate you’re not giving your dreams too much credence, and I’d agree that if it seems like there’s a conscious person in your dreams, it’s closer to the activation of a memory of how people are than to an actual person living in your head.
In fact, I’ve often wondered if we’re conscious at all when we sleep. We wake up with all these memories of having experienced things, but that’s not the same as having actually experienced things. You can have false memories, which means you can remember experiencing things you didn’t. It may be the case that (unless you’re lucid dreaming) you’re not really conscious at all while dreaming. In which case your weirdest experience really wasn’t.
Clearly some parts of your mind are switched off when you sleep. Your reasoning faculties aren’t working correctly, which is why you can be unsure whether you’re dreaming while asleep, yet know with near certainty that you’re awake when you are. Maybe the parts of the mind involved in consciousness are off as well?
On the other hand, I often wonder what people having a seizure experience. Unlike people dreaming, they often report experiencing nothing. But is this true? Or do they just not remember?
I appreciate it, thanks.
Sorry about that. I definitely hacked the web interface together quickly for this experiment. I’m aware of a couple other minor bugs, which I’ll fix at some point.
Playing complex games requires both shallow thinking and deep thinking. Of course, the more brute-forcing you can do, the better you’ll do on any game.
The hypothesis was that they wouldn’t have improved as much on complex games as on simple, more brute-forceable games, which was mildly supported by the data.
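As a made-up illustration of the shape of that comparison (the numbers below are invented, not my data):

```python
# Invented numbers purely to illustrate the hypothesis, not real results.
old_model = {"simple": 0.60, "complex": 0.15}  # win rate by game class
new_model = {"simple": 0.85, "complex": 0.22}

for game_class in old_model:
    delta = new_model[game_class] - old_model[game_class]
    print(f"{game_class}: improved by {delta:.2f}")
```

The hypothesis predicts the simple-game delta outpacing the complex-game delta, which is roughly what the data showed.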
LLMs Suck at Deep Thinking Part 3 - Trying to Prove It (fixed)
I have no idea. I think something happened with the editor? I’ll archive this post and re-post lol
The title, perhaps? I guess I wouldn’t blame anyone. Thank you, by the way.