This seems to me like it was partly generated or rewritten by an LLM. Is that correct?
Yair Halberstadt
You're complaining about how the graph is drawn, and hoping to fix that by drawing a graph that is almost certainly wrong? At least the graph they drew relies only on actual past data.
I agree they would do better to acknowledge that whilst the growth is currently exponential, it will have to stop at some point, but we have no idea when. That gets a bit tiring after a while, though.
Because it has the wrong shape in every way that matters if they draw an S-curve with us at the halfway point (which seems to be the natural failure mode) when we're actually at the 1% mark. The S-curve isn't any more illuminating than simply saying this exponential will stop at some point but we don't know when, and unfortunately it tends to lend itself to overconfident predictions.
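To make that concrete, here's a minimal sketch with made-up numbers (the growth rate, time scale, and candidate ceilings are all my own assumptions, not anything from the post): the rising part of a logistic tracks a pure exponential almost exactly, so the same history is compatible with ceilings of 100x, 1,000x, or 10,000x today's level, and therefore with inflection points almost anywhere in the future.

```python
import numpy as np

def logistic(t, ceiling, rate, midpoint):
    """Standard S-curve: ceiling / (1 + exp(-rate * (t - midpoint)))."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

rate = 0.5                          # assumed growth rate, purely illustrative
t = np.arange(10, dtype=float)      # ten time steps of "past data"
history = np.exp(rate * t)          # growth that has looked exponential so far

for multiple in (100, 1_000, 10_000):
    ceiling = multiple * history[-1]
    # Place the midpoint so the early part of the S-curve tracks the exponential.
    midpoint = np.log(ceiling) / rate
    fit = logistic(t, ceiling, rate, midpoint)
    worst = np.max(np.abs(fit - history) / history)
    print(f"ceiling {multiple:>6}x today: matches history to within {worst:.2%}, "
          f"inflection at t = {midpoint:.0f} (today is t = 9)")
```

All three curves reproduce the history to within about a percent while disagreeing wildly about where the inflection point is; that's the sense in which past data alone can't locate you on the S-curve.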
Whilst technically true, those who attempt to use this fact to predict future growth tend to end up just as wrong as those who don’t.
All exponentials end, but if you don't know when, you need to prepare equally for the scenario where it ends tomorrow and the one where it ends with the universe turned into paperclips.
I think it's good to think concretely about what multiverse trading actually looks like, but problem 1 is a red herring: Darwinian selective pressure is irrelevant where there is only one entity, and ASIs should ensure that, at least over a wide swathe of the universe, there is only one entity. At the boundaries between two ASIs, if defence is simpler than offence, there will be plenty of slack for non-selective preferences.
My bigger problem is that multiverse acausal trade requires agent A in universe 1 to simulate that universe 2 exists, containing agent B, which will in turn simulate agent A in universe 1. That's not theoretically impossible (if, for example, the amount of available compute increases without bound in both universes, or if it's possible to prove facts about the other universe without simulating the whole thing), but it seems incredibly unlikely, and almost certainly not worth the cost of searching for such an agent.
Currently on 9%
That’s the level where manifold can’t really tell you about exact probability because betting no ties up a lot of capital for minimal upside.
Also, by 2026 I'd expect to have GPT-4-level LLMs with 1/10th the parameter count just due to algorithmic improvements (maybe I'm wildly wrong here), so doing the same with a different architecture isn't necessarily as indicative as it seems.
GDM also claims an IMO gold medal.
I don’t know what is in theory the best possible life I can live, but I do know ways that I can improve my life significantly.
Don’t fight your LLM, redirect it!
This is definitely true in software development. Ignore the hucksters selling clean code and agile software development or whatnot, and focus on blog posts by real practicing developers describing how they solved real-world problems and the tradeoffs they faced.
https://www.scattered-thoughts.net/writing/on-bad-advice/ is a great post on this.
If Not Now, When?
I imagine one way to reduce the financial impact of working 80% is to wait until you get a pay rise (or move to a higher-paying role at a new company) and make the switch at the same time, so you never feel financially worse off than you were before.
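For concreteness, a toy calculation (the salary and the size of the rise are my own hypothetical numbers): a rise of about 25% exactly offsets the drop to 80% hours, so anything at or above that means your pay never visibly dips.

```python
old_salary = 60_000                             # hypothetical current salary
rise = 0.25                                     # hypothetical rise from the new role
new_salary_at_80_percent = old_salary * (1 + rise) * 0.8
print(new_salary_at_80_percent >= old_salary)   # True: 75,000 * 0.8 == 60,000
```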
Is a big sign that if there is something here, it's likely to be discovered. We're likely to find out in the next few years whether this is the future of general-purpose AI.
Thanks so much! This is precisely the sort of answer I was looking for!
Ok, yes that makes a lot more sense—whilst tarnishing by association increases incentives to point out flaws in your friend, it decreases incentives to point out flaws in your friend’s friend.
And since most of your friends are also your friends’ friends, the aggregate impact is to decrease incentives to point out flaws in your friends as well.
That sounds exactly like what I was saying: the reason insiders don’t criticise other insiders isn’t because it reduces their status by association. It’s that other insiders don’t like it, and they want to stay insiders.
I actually think this mostly goes the other way:
Generally, people aren't judged for associating with someone if they blow the whistle on that person doing something wrong. But anyone who doesn't whistleblow might still be tarnished by association. So this creates an incentive to be the first to publicly report wrongs.
Now you appear to be talking only about small wrongs, the idea being that you still want to associate with that person, hence whistleblowing wouldn't save you. But there's already a very strong incentive in such cases not to whistleblow, namely that you want to stay friends. So I'm not sure the additional effect on your reputation makes much difference beyond that.
~1% of the world dies every year. If we bring AGI forward 1 year, we save 1%; push it back 1 year, we lose 1%. So pushing back 1 year is only worth it if it reduces P(doom) by 1%.
That would imply that if you could flip a switch with a 90% chance of killing everyone and a 10% chance of granting immortality, then (assuming there were no alternative paths to immortality) you would take it. Is that correct?
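To spell out the arithmetic behind that reductio (my own framing of it: treat a death as equally bad whenever it happens, ignore people not yet born, and ignore any other path to immortality):

```python
p_doom = 0.90           # the switch kills everyone immediately
p_immortality = 0.10    # otherwise nobody ever dies

deaths_without_switch = 1.0   # without immortality, everyone alive today eventually dies
expected_deaths_with_switch = p_doom * 1.0 + p_immortality * 0.0   # = 0.9
print(expected_deaths_with_switch < deaths_without_switch)         # True
```

Under that framing the switch looks like a strict improvement, which is exactly what the question above is probing.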
Thanks for clarifying. Please keep in mind LessWrong's policy on AI-generated content: https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong