There is still a real phenomenon where people spend a lot on poor-quality things instead of longer-lasting, higher-quality ones. At the extreme, this is paying rent instead of buying and building equity, buying consumable goods instead of investments, or working jobs instead of building passive income; all of these use money up rather than building generational wealth.
I strongly second a number of the recommendations made here about who to reach out to and where to look for more information. If you’re looking for somewhere to donate, the Long-Term Future Fund is an underfunded and very effective funding mechanism. (If you’d like more control, you could engage with the Survival and Flourishing Fund, which has a complex process to make recommendations.)
My headcanon for the animals was that, early on, they released viruses that genetically modified non-human animals in ways that don’t violate the pact.
I didn’t think the pact could have been as broad as “the terrestrial Earth will be left unmodified,” because the causal impact of their actions certainly changed things. I assumed it was something like “AIs and AI-created technologies may not do anything that interferes with humans’ actions on Earth, or harms humans in any way”—but genetic engineering instructions sent from outside of the Earth, presumably pre-collapse, didn’t qualify because they didn’t affect humans directly; they made animals affect humans, which was parsed as similar to the environment’s impact on humans, not as an AI technology.
Yes, except that as soon as AI can replace the other sources of friction, we’ll have a fairly explosive takeoff. He thinks these sources of friction will stay forever, while I think they are only current barriers: the engine for radical takeoff won’t be traditional processes adopting the models in individual roles, it will be new business models developed to take advantage of the technology.
Much like early TV was just videos of people putting on plays, and it took time for people to realize the medium’s potential. But once they did, they didn’t make plays that were better suited for TV; they did something that actually used the medium well. And what using AI well means, in the context of business implications, is cutting out human delays, inputs, and required oversight. Which is worrying for several reasons!
I mostly agree, but “the reference class of gamers who put forth enough effort to beat the game” is still necessarily truncated by omitting anyone who nonetheless failed to complete it, and likely also omits gamers embarrassed by how long it took them.
Meanwhile, the average human can beat the entirety of Red in just 26 hours, and with substantially less thought per hour.
I mostly agree with the post, but this number is absolutely bullshit. What you could more honestly claim, given the link, is that the average completion time among hardcore gamers who both completed the game and then entered their time into this type of website is 26 hours. That’s an insanely different claim. In fact, I would be shocked if even 50% of people who have played a Pokemon game have completed it at all, much less done so in under a week of playtime.
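As a toy illustration of how strong this selection effect can be (my own made-up numbers, purely hypothetical):

```python
# Toy simulation of truncation bias: averaging only self-reported finishers
# understates typical time-to-complete. All parameters here are invented.
import random

random.seed(0)
# True hours-to-complete for 10,000 hypothetical players (lognormal spread).
players = [random.lognormvariate(3.5, 0.6) for _ in range(10_000)]
# Players over 60 hours give up and never finish; only half of finishers
# bother to report their time to a completion-time website.
reporters = [t for t in players if t < 60 and random.random() < 0.5]

print(sum(players) / len(players))      # mean over everyone who started
print(sum(reporters) / len(reporters))  # mean over reporters: noticeably lower
```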
I’m not sure we’d see this as starkly if people can change roles and shift between job types, but haven’t we already seen firms engage in large rounds of layoffs and then follow up by hiring fewer coders over the past couple of years?
Lemonade is doing something like what you describe in insurance. I suspect other examples exist. But most market segments, even in “pure” software, don’t revolve around only the software product, so it is slower to become obvious whether better products are emerging.
My understanding of the situation, from speaking to people who code at normal firms, and to management, is that this is all about the theory of constraints. As a simplified example: if you previously needed one business analyst, one QA tester, and one programmer, a day each, to do a task, and the programmer’s efficiency doubles, or quintuples, the impact on output is zero, because the firm isn’t set up to go much faster.
Firms need to rebuild their processes around this to take advantage of it, and that’s only starting to happen, and only at some firms.
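To make the bottleneck arithmetic concrete, here is a minimal sketch (mine, with made-up stage rates) of why speeding up one stage alone changes nothing:

```python
# In a serial pipeline, daily throughput is bounded by the slowest stage,
# so a faster programmer doesn't move output until the process changes.

def tasks_per_day(stage_rates: dict) -> float:
    """Throughput of a serial pipeline is the minimum stage rate."""
    return min(stage_rates.values())

baseline = {"analyst": 1.0, "qa": 1.0, "programmer": 1.0}
faster_dev = {**baseline, "programmer": 5.0}  # programmer becomes 5x faster

print(tasks_per_day(baseline))    # 1.0 task/day
print(tasks_per_day(faster_dev))  # still 1.0 task/day: analyst and QA bind
```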
This seems reasonable, though the efficacy of the learning method seems unclear to me.
But:
with a heavily-reinforced constraint that the author vectors are identical for documents which have the same author
This seems wrong. To pick on myself: my peer-reviewed papers, my Substack, my LessWrong posts, my 1990s blog posts, and my Twitter feed are all substantively different, in ways that I think the author vector should capture.
There’s a critical (and interesting) question about how you generate the latent space of authors, and/or how it is inferred from the text. Did you have thoughts on how this would be done?
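For concreteness, here is a minimal sketch (my own construction, not anything from the post) of how a “same author, same vector” constraint might be expressed as a training penalty, assuming some text encoder has already produced per-document vectors:

```python
# Hypothetical penalty pulling same-author document embeddings together.
# `doc_vecs` is assumed to come from some text encoder; nothing here is
# from the original post.
import torch
import torch.nn.functional as F

def author_consistency_loss(doc_vecs: torch.Tensor,
                            author_ids: torch.Tensor) -> torch.Tensor:
    """Penalize spread of document vectors around each author's centroid.

    doc_vecs:   (n_docs, dim) document embeddings
    author_ids: (n_docs,) integer author labels
    """
    loss = torch.tensor(0.0)
    for author in author_ids.unique():
        vecs = doc_vecs[author_ids == author]
        if len(vecs) > 1:
            centroid = vecs.mean(dim=0, keepdim=True)
            loss = loss + F.mse_loss(vecs, centroid.expand_as(vecs))
    return loss
```

Heavily weighting this term forces near-identical vectors per author; my objection above is that a softer version, letting vectors vary by venue and era, seems closer to reality.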
That is completely fair, and I was being uncharitable (which is evidently what happens when I post before I have my coffee; apologies).
I do worry that we’re not being clear enough that we don’t have solutions for this worryingly near-term problem, and think that there’s far too little public recognition that this is a hard or even unsolvable problem.
it could be just as easily used that way once there’s a reason to worry about actual alignment of goal-directed agents
This seems to assume that we solve various Goodhart’s law and deception problems.
Assuming that timelines are exogenous, I would completely agree—but they are not.
The load-bearing assumption here seems to be that we won’t make unaligned superintelligent systems with current methods soon enough for it to matter.
This seems false, and at the very least should be argued explicitly.
My original claim, “the ability to ‘map’ Turing machine states to integers,” was an assertion over all possible Turing machines and their maps.
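As an illustration of the kind of map I meant (a standard construction, not anything specific from this thread), any machine’s configurations can be coded into integers with a pairing function:

```python
# Cantor pairing: a bijection from pairs of naturals to naturals, the usual
# building block for coding machine configurations as single integers.

def pair(a: int, b: int) -> int:
    """Map (a, b) in N x N to a unique integer in N."""
    return (a + b) * (a + b + 1) // 2 + b

# A toy configuration for some fixed machine: (state index, head position).
print(pair(3, 7))  # a unique integer code for state 3, head at cell 7
```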
I have certainly seen that type of frustrating unwillingness to update occur on his part at times as well, but I haven’t seen indications of bad faith. (I suspect this could be because your interpretation of the phrase “bad faith” is different from, and far more expansive than, mine.)
A few examples of being reasonable, which I found looking through quickly: https://twitter.com/GaryMarcus/status/1835396298142625991 / https://x.com/GaryMarcus/status/1802039925027881390 / https://twitter.com/GaryMarcus/status/1739276513541820428 / https://x.com/GaryMarcus/status/1688210549665075201
@Veedrac—if you want concrete examples, search for both of our usernames on twitter, or more recently, on bluesky.
A counterargument is that it takes culture to build cumulative knowledge to build wealth to create cognitive tools that work well enough to do obviously impressive things. And 50,000 individuals distributed globally isn’t enough to build that culture.