Oh, I didn’t realize you could just paste LaTeX into the LW editor and it auto-parses. I thought you had to use Ctrl+4 to summon a block in which you write math equations.
Good job on finding the Wikipedia page for this! I didn’t know what it was called; I just read about it in Landau vol. 1 many years ago.
And if I had to typeset the equations, I probably wouldn’t have written this quick take.
Perhaps my favourite relation in physics is
$$\frac{t}{T} = \left(\frac{l}{L}\right)^{1 - k/2}.$$
This says that for a bunch of particles in a potential $V = a x^{k}$, if you let the system evolve over time $T$, tracing out a path which has size $L$ in some sense, then there is another path which is a rescaled version of the original one, s.t. if it has size $l$, the time taken to trace out this new path is $t$.
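(Quick sketch of why this holds, reconstructing the mechanical-similarity argument from Landau vol. 1; the notation below is mine. Rescale coordinates and time by
$$x \to \lambda x, \qquad t \to \lambda^{1 - k/2}\, t.$$
Velocities then scale as $\lambda / \lambda^{1 - k/2} = \lambda^{k/2}$, so the kinetic energy $\tfrac{1}{2} m \dot{x}^{2}$ picks up a factor $\lambda^{k}$, and the potential $a x^{k}$ picks up the same factor $\lambda^{k}$. The whole Lagrangian is multiplied by a constant, which leaves the equations of motion unchanged, so the rescaled path is also a valid trajectory. Setting $\lambda = l/L$ recovers $t/T = (l/L)^{1 - k/2}$.)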
We can use this trick to create a bunch of “scaling laws” for simple physical systems. For example:
1) Let $V = a x^{-1}$, i.e. a gravitational potential. Then we have $k = -1$, so
$$t/T = (l/L)^{3/2},$$
$$(t/T)^{2} = (l/L)^{3}.$$
I.e. Kepler’s third law. In fact, this is a more general result, because it also applies to particles falling into a potential well. If you increase the distance over which the particle falls, this equation tells you how much the time increases, too.
2) $V = a x^{2}$, i.e. the potential of a harmonic oscillator. Then we have $k = 2$, so
$$t/T = (l/L)^{0} = 1.$$
That is, we can take an oscillation and increase the amplitude without changing the time it takes to complete the oscillation.
3) $V = a x$, i.e. the potential of a constant force. Then we have $k = 1$, so
$$t/T = (l/L)^{1/2},$$
$$(t/T)^{2} = l/L.$$
This reflects the fact that the position of a particle experiencing a constant force increases like $t^{2}$.
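As a sanity check, here’s a quick numerical sketch (my own illustration, not part of the original quick take): drop a particle from rest under a constant force and confirm the fall time scales as the square root of the fall distance.

```python
# Sanity check for case 3: under a constant force (V = a*x, k = 1),
# the time to traverse a rescaled path should obey t/T = (l/L)^{1/2}.
def fall_time(distance, accel=1.0, dt=1e-5):
    """Integrate x'' = accel from rest until the particle has covered `distance`."""
    x, v, t = 0.0, 0.0, 0.0
    while x < distance:
        v += accel * dt  # semi-implicit Euler step
        x += v * dt
        t += dt
    return t

T = fall_time(1.0)  # original path, size L = 1
t = fall_time(4.0)  # rescaled path, size l = 4
print(t / T)               # ~2.0, matching...
print((4.0 / 1.0) ** 0.5)  # ...the predicted (l/L)^{1/2} = 2.0
```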
So you can see you get a lot of mileage out of this pretty simple equation.
A complicated plot about a complicated plot.
I disagree, and think your analogy to MS Word may be where the crux lies. We could only build MS Word because it relies on a bunch of simple, repeated abstractions that keep cropping up (e.g. parsers, rope data structures, etc.) in combination with a bunch of random, complex crud that is hard to memorize. The latter is what you’re pointing at, but that doesn’t mean there aren’t a load of simple, powerful abstractions underlying the whole thing which, if you understand them, let you get the program to do pretty arbitrary things. Most of the random high-complexity stuff is only needed locally, and you can get away with just understanding the bulk structure of a chunk of the program plus whatever bits of trivia you need to make the changes you want to MS Word.
This is unlike the situation with LLMs, which we don’t have the ability to create by hand, or to seriously understand an arbitrary section of their functionality. Though maaaaybe we could manage to engineer something like GPT-2 right now, but I’d bet against that for GPT-3 onwards.
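To make the “simple, repeated abstractions” point concrete, here’s a minimal rope sketch in Python (my own illustration; I’m not claiming this is how MS Word or any real editor implements it). A rope stores text as a binary tree, so concatenation is cheap and indexing just walks the tree instead of copying whole strings around.

```python
# Minimal rope: text stored as a binary tree of string fragments.
class Leaf:
    def __init__(self, text):
        self.text = text
        self.weight = len(text)  # total characters in this subtree

    def index(self, i):
        return self.text[i]

class Concat:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.weight = left.weight + right.weight  # O(1) concatenation

    def index(self, i):
        # Descend into whichever child holds position i.
        if i < self.left.weight:
            return self.left.index(i)
        return self.right.index(i - self.left.weight)

rope = Concat(Leaf("Hello, "), Concat(Leaf("MS "), Leaf("Word")))
print(rope.index(8))  # 'S' -- found by walking the tree, no string copying
```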
When this works, it really works; I have seen Claude perform some pretty remarkable feats while inside this kind of “information-rich on-rails experience,” ones that impressed me much more than any of the high-autonomy agentic one-shotting stuff that the hype is focused on.
Could you give an example?
Your explanation has left me wondering how much of the work done in achieving these feats is you providing the right context. Certainly, when I’m solving problems, a lot of the work is finding the right context.
No concentration of the target audience; it doesn’t seem like many people interested in this kind of fiction would be hanging out there.
I disagree! Of the ratfic fans I know, many have frequented Royal Road, myself included.
Dry Customer Satisfaction
I will assign him to our tier C high-value client list, based solely on his facial symmetry score and wealth.
Wet Customer Satisfaction
A complimentary dessert would be very cute. And wait, what if we just deliver one so they have to share it and maybe with only one spoon and he could start eating it teasingly, pretending he’s going to take it all for himself and then offer it to her gallantly or they could have a flirtatious argument about who deserves it more or-
The worst thing about AI sycophancy is that it works.
People have trouble accepting conversational bids, even ones that seem obvious. I’ve seen cases where person A asks person B the same dang question 10 times throughout a conversation and B just can’t seem to give a response that’s in any way related to the question that was asked. Often, I feel like humans just confabulate vaguely plausible-looking responses in a conversation as filler until they can opine on what they want to talk about. A less cynical explanation is that people aren’t making the conversational bids they appear to be making, e.g. a complaint about a problem is a bid not for solutions but for commiseration.
Still, I think there is some merit to the idea that people are actually kind of bad at parsing and/or responding to conversational bids. Which is why it’s striking when someone is laser-focused on what you say and gives cogent responses to the bids you make. Equally striking is being that someone, focused on understanding and following along with another person’s motions. It requires building up intense curiosity about what’s going on in someone else’s mind, and frankly it feels intimate. Would recommend trying it out.
Opus was haranguing me about the effortless-updates point, as it was the shakiest claim in the essay. Anyway, I stand by it, with the caveat that once you fully understand a point, an update should be effortless. If it isn’t, and you’re reluctant to do the update s.t. your updates don’t look like a random walk, it’s likely that you’re being pressured into doing one anyway. This is what I’d call a fake update.
As to the example you gave, well: if you need to rely on expert consensus, then you don’t actually understand the point being made. And if you need to think about it for a while, again, you don’t understand the point being made.
This is, I think, a fairly weak position because I’m ignoring all of the sweat that goes into understanding a concept, which in and of itself can require building a lot of new cognitive structures and links between thoughts. Calling those changes updates seems sensible to me, and now I think I’ve argued myself out of my initial claim. Some updates are effortless, perhaps even most, but many aren’t.
Sure, but I was assuming some sort of magical growth process which could make identical brains, perhaps some form of nanotech. I realize that’s a ridiculous ask and doesn’t reduce the difficulty of the problem in any way. Heck, it increases it. But that’s the only thing I could think of, with less than 5 minutes of thought, that would definitely produce identical (up to connectome) copies of brains.
The images above are taken from the BPF’s Accreditation page. On the left, you can see the pig brain which I preserved, winning the Large Mammal prize. The cellular structure is intact and it’s easy to trace the connections between the neurons. The right-hand image shows the damage caused by traditional cryopreservation, even under ideal circumstances. Real preservation cases are far worse due to pre- and post-mortem brain damage. Maybe a superintelligence could reconstruct the structure – but it’s unclear whether the information to do so remains.
There’s got to be some way to see how much info we can reconstruct from brains which were cryopreserved a few hours after death. Perhaps if we could grow brains in a vat, we could make two copies of the same brain, cryopreserve one brain properly, and let the other degrade before cryopreserving it. Then the former copy serves as a ground truth for reconstruction attempts. Of course, that’s replacing one hard problem with another hard problem. But it seems worth spending more than literally 5 minutes on this problem.
resolving disagreements so quickly that you don’t even notice them as disagreements
I wanted to strong up-vote this post, so I opened it up and saw I’d already strong up-voted it. Now I wish I could strong up-vote it again.
Michael means “Who is like El [God]?” and in the elect, Michael had power over Him, placing him in a position of the highest authority, like El. This is not a coincidence, because nothing is ever a coincidence, least of all when you’re making Kabbalistic inferences about a short story on LW written by a rat.
I don’t think we disagree. To say a bit more about my thinking here, let’s take the very rich as one example of unusual people. The very rich mostly got where they are by being really exceptional in one area. Otherwise, they’re not that different from people you actually know. Probably, you know someone who’s got pretty similar psychology to them, absent one or two idiosyncratic traits/quirks. E.g. Seymour Cray, who believed machine elves told him to build supercomputers and thought it was a good idea to listen to them. Probably, you know someone who has crazy supernatural beliefs like that, except their beliefs aren’t as adaptive, nor are they as competent. The remaining differences can largely be attributed to the difference in contexts between Seymour Cray and that crazy person you know.
Like, what I’m getting at here is that an unusual person is just a relatively minor neurological variant on some guy you probably know, who was placed in a different context. If their positions were swapped, they’d behave more similarly than would be credited by people who believe the super rich are inhuman demons or whatever.
Sure, but lots of people coordinate to do bad things, e.g. drug traders, groomers, etc. So I expect some rich people will get up to this stuff, too.
I haven’t read the Epstein Files, so take this with a lump of salt. But judging from my Twitter feed, I’d say they haven’t changed my mind much. They’re just people, right? Like, a couple percent of children get sexually abused, and a disturbingly large fraction get raped, so most people probably know a child abuse victim. They may know a paedophile and even think well of them. And most likely, they don’t know they know these people. Generalize this to all sorts of behaviours, and I think you’ll find the global elite aren’t that different from the average person.
And honestly, this realization feels like a bit of a superpower. “That super high status dude over there? Yeah, he’s just some guy.” It feels like taking off starry-eyed or grim-dark goggles and looking at a person, not a caricature.
AI Safety at frontier labs is essentially a bunch of shallow instincts/behaviors covering up an ultimately Pythian power-maximizing entity
You could replace “AI safety at frontier labs” with “pro-social policy at powerful organizations” and this sentence would probably still be true, no?
Over the last four years, has anything happened that actually contradicted this model? An event where an AGI lab actually did something in the name of safety that meaningfully cost it? Something that didn’t predictably end up working out to instead boost the lab’s PR/fundability and improve its products, or wasn’t so cheap for a lab to do as to not be worth the attention of its Pythian core?
What could they have done otherwise? If I had to venture an example, I’d say “any support for legislation binding them to their (stated) voluntary commitments”.
I’d say ratfics are more about becoming God, and as God you can naturally Fix the world. So you can view rats as atheists who believe that since God doesn’t exist, we must build Him.
Edit: Really, ratfics are about becoming more than you are, with becoming God as the natural limit.