20-something who decided to write about psychology, politics, and statistics.
Jensen
Other evidence I would add to the theory of the brain being a ULM is the existence of the g-factor, and the fact that the general factor is the one that explains the most variation across cognitive tests. In addition, if you model human cognitive abilities as universal and specific components, then evolutionarily speaking it would make sense for the universal aspect to be under stronger selection than any specific domain. One exception to this could be language learning, which is important simply for the sake of being able to communicate.
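The "explains the most variation" claim can be illustrated with a toy simulation (my own sketch, not from the original comment, with made-up loadings): generate test scores that all share a common "g" factor plus independent noise, then check that the first principal component of their covariance matrix carries most of the variance.

```python
# Toy sketch (hypothetical numbers): six cognitive tests, each loading
# on a shared general factor g plus test-specific noise. The first
# principal component should then explain most of the total variance.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)  # shared general factor

# Each test = 0.7 * g + independent noise
tests = np.column_stack(
    [0.7 * g + 0.5 * rng.normal(size=n) for _ in range(6)]
)

cov = np.cov(tests, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
share = eigvals[0] / eigvals.sum()
print(f"variance explained by first component: {share:.2f}")
```

With these (arbitrary) loadings the first component accounts for roughly 70% of the variance; the point is only that a single shared factor naturally produces this pattern, not that real loadings look like these.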
>Overall, I think the linked article reinforces my preexisting impression that Curtis Yarvin is a fool.
Given that he was in the SMPY, I don’t think intelligence is what prevents him from understanding this issue; rather, he seems to have approached it uncritically and overconfidently. In effect, he is not distinguishable from a fool.
What’s the LW take on Curtis Yarvin’s article about AI safety?
The TL;DR is that Yarvin argues that an AGI system can only affect the virtual world, while humans can affect both the virtual and physical worlds. The AGI/virtual world depends on the physical world to provide it electricity (and potentially internet access) to function, while the physical world is not nearly as dependent on the virtual world (though in practice it is to an extent, due to social systems and humans adopting these technologies). This produces a system analogous to slavery, due to the power differential between the two groups/worlds.
He also argues that intelligence has diminishing returns, using the example of addition:
“A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.”
Personally, I think this is a bad argument: an AGI can overcome the constraints of human information-processing speed and energy levels, which I view as substantial bottlenecks to the success of humans.
As for his thoughts on the balance of power between humans and AGI systems, I think this would be true in the early days of “foom” but would be less relevant as the AGI becomes more embedded within the economic and political systems of the nation.
Jensen’s Shortform
No, I would agree with Logan that calling something “cringy” is mindkilly, since it instills a strong sense of defensiveness in the accused. I’m not even sure that the cringiness I felt was rooted in the fact that the post seemed fake, but it was real nonetheless. For this particular post, the average LessWronger doesn’t seem to find it cringy, but I doubt I am alone in thinking this way.
Gwern’s site design is extremely “rationalist” to me, though I don’t see that as a bad thing. The site itself looks beautiful.
I think the interrater reliability of “cringiness” would be surprisingly high.
Sorry, this is cringy.
I would find this simply unfunny if it were the basics of black nationalist or Nazi bodybuilder discourse, but let’s face it: LessWrongers are not black nationalists or Nazi bodybuilders. The aesthetics of an object should ideally reflect its true nature; the minimalistic and monochromatic design of this website reflects the nature of this movement well. This post, not so much.
What are you supposed to conclude with data that doesn’t accurately reflect what it is supposed to measure?
>”the same product getting cheaper”
floss costing $5 in year 2090 and then lowering to $3 in year 2102.
>”the same cost buying more”
a laptop in year 1980 running at piss/minute costing $600, while a laptop in year 2020 running at silver/minute costs $600.
change 1 is a productivity increase.
change 2 isn’t.
A few points:
I think it’s important to distinguish between economic growth and value. A laptop that costs $500 in 1990 and a laptop that costs $500 in 2020 are nowhere near each other in value, but they count equally toward GDP.
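A minimal sketch of this point, with hypothetical numbers of my own (the `value_to_buyer` figures are illustrative, not measured): nominal GDP records only the price paid, so two laptops sold at the same price contribute identically, no matter how different the value delivered to the buyer.

```python
# Toy illustration (hypothetical numbers): GDP sees only the transaction
# price, so equal-priced goods contribute equally regardless of the
# (much larger) difference in value to the buyer.
laptops = {
    "1990 laptop": {"price": 500, "value_to_buyer": 600},
    "2020 laptop": {"price": 500, "value_to_buyer": 50_000},
}

gdp_contribution = {name: spec["price"] for name, spec in laptops.items()}
consumer_surplus = {
    name: spec["value_to_buyer"] - spec["price"]
    for name, spec in laptops.items()
}

print(gdp_contribution)  # identical GDP contributions
print(consumer_surplus)  # wildly different surplus to the buyer
```

The gap between the two dictionaries is exactly the "economic growth vs. value" distinction: the surplus difference is invisible to the GDP accounting.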
You are correct that many barriers to growth are social/legal rather than technological (see Burja), though in the long term, jobs that can be replaced with AI will be replaced with AI, as the social need for bullshit-job havers decreases significantly once they leave.
I don’t think it is fair to infer that computers had a small impact on GDP, and especially on value, from the decreasing secular trend in productivity. Innovation and scientific output have been slowing as well; it is difficult to name a recent major economic or scientific breakthrough that isn’t computational in nature. Compare this to the 1900–1970 period: cars, lasers, refrigerators, planes, plastics, nuclear technology, and penicillin were all highly impactful inventions.
edit: oh, and if we get to AGI, that will definitely be a productivity increase. The literature suggesting that national IQ differences cause differences in economic growth is fairly solid; I have no reason to believe this shouldn’t apply to AI, with the exception of an AGIpocalypse.
The ironic thing about the push for the 6-hour work week is that most white-collar workers aren’t even productive for half the time they are on the job. In addition, fixed-effects models controlling for rises in income and occupational status suggest that men may actually feel less happy working under 8 hours a week [1].
[1] - https://easthunter.substack.com/p/happiness-1