I went through Gwern’s posts and collected all the posts with importance 8 and higher as of 2024-09-04 in case someone else was searching for something like this.
10
9
8
The recent post on reliability and automation reminded me that my text-expansion tool Espanso is not reliable enough on Linux (Ubuntu, Gnome, X11). Anyone here using reliable alternatives?
I’ve been using Espanso for a while now, but its text expansions miss characters too often, which is worse than useless. I fiddled with Espanso’s settings just now and set the backend to Clipboard, which seems to help with that, but it still has bugs, like the trigger’s special characters remaining in the output (“@my_email_shorthand” → “@myemail@gmail.com”).
In particular, I think you might need to catch many escape attempts before you can make a strong case for shutting down. (For concreteness, I mostly imagine situations where we need to catch the model trying to escape 30 times.)
So instead of leaving the race once the models start scheming against you, you keep going to gather more instances of scheming until you can finally convince people? As an outside reader of that story, I’d just be screaming at the protagonists that clearly everyone can see where this is going: scheming attempt number 11 will be the one that is just good enough to succeed. And in the worlds where we do catch them 30 times successfully, it feels like people would argue: this is clear evidence that the models aren’t “actually dangerous” yet, so let’s keep scaling “responsibly”.
There is probably a lot of variation between people regarding that. In my family, meds improved people’s sleep across the board (by making people less sleepy during the day, so more activity and fewer naps). When I reduced my medication from 70mg to 50mg for a month to test whether I still needed the full dose, the thing that annoyed me the most was my sleep (waking up at night and not falling back asleep became more frequent; falling asleep initially was maybe slightly easier). Taking it too late in the afternoon is really bad for my sleep, though.
Things I learned that surprised me from a deep dive into how the medication I’ve been taking for years (Vyvanse) actually gets metabolized:
It says in the instructions that it works for 13 hours, and my psychiatrist informed me that it has a slow onset of about an hour. What that actually means is that after ~1h you reach 1/2 the peak concentration, and after 13 hours you are back down to 1/2 the peak concentration, because the half-life is 12h (and someone decided at some point that half the peak is where the exponential “starts” and “ends”?). Importantly, this means ~1/4 of the medication is still present the next day! (See the small decay sketch after this list.)
Here is some real data, which fits the simple exponential decay rather well (it’s from children, though, who metabolize dextroamphetamine faster, which is why the half-life there is only ~10h).
If you eat ~1-3 grams of baking soda, you can make the amount of medication you lose through urine (usually ~50%) go to 0[1] (don’t do this! Your body probably keeps its urine pH at the level it does for a reason! You could get kidney stones).
I thought the opposite effect (acidic urine gets rid of the medication quickly) explained why my ADHD psychologist had told me that the medication works less well combined with citrus fruit, but no! Citrus fruit actually increases your urine pH (or mostly doesn’t affect it much)? Probably because of the citric acid cycle, which means more of the acid leaves as CO2 through your lungs? (I have this from GPT-4, and a rough gloss over the details checked out against Wikipedia, but this could be wrong; I barely remember my chemistry from school.)
Instead, grapefruit has some ingredients known to inhibit the enzymes that metabolize several drugs, including dextroamphetamine (I don’t understand yet whether this inhibitory effect is actually large enough to be a problem, though).
This brings me to another observation: apparently each of these enzymes (CYP3A4/5, CYP2D6, CYP2C9) is involved in the metabolism of >10-20% of drugs. Wow! Seems worth learning more about them! CYP2D6 is actually used twice in the metabolism of dextroamphetamine, once for producing and once for degrading an active metabolite.
I’m currently still learning the basics about neurotransmitters from a textbook, and I might write another update once/if I get to the point where I feel comfortable writing about the effects of dextroamphetamine on signal transmission.
[1] Urinary excretion of methylamphetamine in man (scihub is your friend)
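To make the half-life arithmetic above concrete, here is a minimal sketch of pure exponential decay from the peak, using only the ~12h half-life mentioned above; the absorption phase before the peak is ignored, so this is an illustration of the math rather than a real pharmacokinetic model.

```python
# Minimal sketch: exponential decay from the peak concentration with a ~12 h
# half-life (the number from the text). Absorption before the peak is ignored.
HALF_LIFE_H = 12.0  # half-life in hours

def fraction_of_peak(hours_after_peak: float) -> float:
    """Fraction of the peak concentration remaining a given time after the peak."""
    return 0.5 ** (hours_after_peak / HALF_LIFE_H)

for h in (0, 12, 24):
    print(f"{h:2d} h after peak: {fraction_of_peak(h):.2f} of peak concentration")
# 12 h after the peak ~1/2 remains, and 24 h after the peak ~1/4 remains,
# which is the "a quarter is still present the next day" point above.
```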
Looking forward to the rest of the sequence! On my current model, I think I agree with ~50% of the “scientism” replies (roughly I agree with those relating to thinking of things as binary vs. continuous, while I disagree with the outlier/heavy-tailed replies), so I’ll see if you can change my mind.
The technical background is important, but in a somewhat different way than I’d thought when I wrote it. When I was writing it, I was hoping to help transmit my model of how things work so that people could use it to make their own decisions. I still think it’s good to try to do this, however imperfectly it might happen in practice. But I think the main reason it is important is because people want to know where I’m coming from, what kinds of things I considered, and how deeply I have investigated the matter.
Yes! I think it is beneficial and important that someone who has a lot of knowledge about this transmits their model on the internet. Maybe my Google-fu is bad, but I usually have a hard time finding articles like this when there doesn’t happen to be one on LessWrong (I can only think of this one counterexample that I remember finding reasonably quickly).
Raising children better doesn’t scale well: neither in how much oomph you get out of it per person, nor in how many people you can reach with this special treatment.
What (human or not) phenomena do you think are well explained by this model? I tried to think of some for 5 minutes, and the best I came up with was the strong egalitarianism among hunter-gatherers. I don’t actually know that much about hunter-gatherers, though. In the modern world, something where “high IQ” people are doing worse is sex, but it doesn’t seem to fit your model.
So on the meta-level you need to correct weakly in the other direction again.
I used Alex Turner’s entire shortform as context in my prompt for GPT-4, which worked well enough to make the task difficult for me, but maybe I just suck at this task.
By the way, if you want to donate to this but thought, like me, that you need to be an “accredited investor” to fund Manifund projects, that only applies to their impact certificate projects, not this one.
My point is more that ‘regular’ languages form a core to the edifice because the edifice was built on it, and tailored to it
If that was the point of the edifice, it failed successfully, because those closure properties made me notice that visibly pushdown languages are nicer than context-free languages, but still allow matching parentheses and are arguably what regexp should have been built upon.
My comment was just based on a misunderstanding of this sentence:
The ‘regular’ here is not well-defined, as Kleene concedes, and is a gesture towards modeling ‘regularly occurring events’ (that the neural net automaton must process and respond to).
I think you just meant that there’s really no satisfying analogy explaining why it’s called ‘regular’. What I thought you were implying is that this class wasn’t crisply characterized, then or now, in terms of math (it is). Thanks to your comment, though, I noticed a large gap in the CS-theory understanding I thought I had. I thought that the 4 levels usually mentioned in the Chomsky hierarchy are the only strict subsets of languages that are well characterized by a grammar, an automaton, and a whole lot of closure properties. Apparently the emphasis on these languages in my two stacked classes on the subject 2 years ago was a historical accident? (Looking at Wikipedia, visibly pushdown languages allow intersection, which from my quick skim makes them more natural than context-free languages.) They were only discovered in 2004, so perhaps I can forgive my two classes on the subject for not including developments 15 years in the past. Does anyone have post recommendations for propagating this update?
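To spell out the closure-property contrast I’m gesturing at (stated from memory, so double-check the details): context-free languages are not closed under intersection, while visibly pushdown languages are.

```latex
% Classic witness that context-free languages are not closed under intersection:
% both factors are context-free, but their intersection is the well-known
% non-context-free language a^n b^n c^n.
\[
  L_1 = \{\, a^n b^n c^m \mid n, m \ge 0 \,\}, \qquad
  L_2 = \{\, a^m b^n c^n \mid n, m \ge 0 \,\},
\]
\[
  L_1 \cap L_2 = \{\, a^n b^n c^n \mid n \ge 0 \,\}.
\]
% Visibly pushdown languages avoid this: the alphabet is partitioned up front
% into call, return, and internal symbols, so the stack behaviour of two
% automata is synchronized by the input and a product construction stays
% visibly pushdown; the class is closed under intersection, union, and
% complement (Alur & Madhusudan 2004).
```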
I noticed some time ago that there is a big overlap between the lines of hope mentioned in Garrett Baker’s post and the lines of hope I already had. The remaining things he mentions are lines of hope that I at least can’t antipredict, which is rare. It’s currently the top plan/model of alignment that I would want to read a critique of (to destroy or strengthen my hopes). Since no one else seems to have written that critique yet, I might write a post myself (leave a comment if you’d be interested to review a draft or have feedback on the points below).
if singular learning theory is roughly correct in explaining confusing phenomena about neural nets (double descent, grokking), then the things that are confusing about these architectures are pretty straightforward implications of probability theory (implying we might expect fewer differences in priors between humans and neural nets, because the inductive biases are less architecture-dependent).
the idea of whether something like “reinforcing shards” can be stable if your internals are part of the context during training even if you don’t have perfect interpretability
The idea that maybe the two ideas above can stack? If for both humans and AI the training data is the most crucial ingredient, then perhaps we can develop methods for comparing human brains and AI. If we get to the point of being able to do this in detail (a big if; especially on the neuroscience side this seems possibly hopeless?), then we could get further guarantees that the AI we are training is not a “psychopath”.
Quite possibly further reflection or feedback would change my mind, and counterarguments/feedback would be appreciated. I am quite worried about motivated reasoning making me think this plan is better than it is, because it would give me something tractable to work on. I also wonder to what extent people planning to work on methods that should be robust enough to survive a sharp left turn are pessimistic about lines of research like this only because of the capability externalities. I have a hard time evaluating the capability externalities of publishing research on plans like the above. If someone is interested in writing a post about this or reading one, feel free to leave a comment.
Aren’t regular languages really well defined as the weakest level in the Chomsky Hierarchy?
Would it change your mind if GPT-4 were able to do the grid tasks when I manually transcribed them into different tokens? I tried to manually get GPT-4 to turn the image into a Python array, but it indeed has trouble performing just that task alone.
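For illustration, here is roughly what I mean by “transcribing to different tokens”; the grid and the value-to-symbol mapping below are made up for the example and are not an actual task from the benchmark.

```python
# Hypothetical illustration: render a small grid as unambiguous text tokens
# so the model does not have to parse the image itself. The grid values and
# the token mapping are invented for this example.
grid = [
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
]

TOKENS = {0: ".", 1: "#"}  # map each cell value to a distinct symbol

def grid_to_prompt(g):
    """Render a grid as a newline-separated block of tokens for the prompt."""
    return "\n".join(" ".join(TOKENS[cell] for cell in row) for row in g)

print(grid_to_prompt(grid))
# . . #
# . # .
# # . .
```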
That propagates into a huge difference in worldviews. Like, I walk around my house and look at all the random goods I’ve paid for—the keyboard and monitor I’m using right now, a stack of books, a tupperware, waterbottle, flip-flops, carpet, desk and chair, refrigerator, sink, etc. Under my models, if I pick one of these objects at random and do a deep dive researching that object, it will usually turn out to be bad in ways which were either nonobvious or nonsalient to me, but unambiguously make my life worse and would unambiguously have been worth-to-me the cost to make better.
Based on my one deep dive on pens a few years ago, this seems true. Maybe that is too high-dimensional and too unfocused a post, but maybe there should be a post on “the best X for every common product people use every day”? And then we somehow filter for people with actual expertise? Like for pens, you want to go with the recommendations of “The Pen Addict”.
For concreteness: in this task it fails to recognize that all of the cells get filled, not only the largest one. To me that gives the impression that the image is just not getting compressed really well, while the reasoning GPT-4 is doing is just fine.
It not being linked on Twitter and Facebook seems more like a feature than a bug, given that when I asked Gwern why a page like this doesn’t already exist, he wrote to me that he doesn’t want people to mock it.