Can you taboo the words “weird” and “wrong” in that comment?
Precommitment for removal and optionality for adding.
There’s a discord for Crypto+Rationalists you may be interested in if you’re not already aware: https://discord.gg/3ZCxUt8qYw
To any high schoolers reading this: If I could send just one of the items from the above list back to myself in high school, it would be “lift weights.” Starting Strength is a good intro.
I have a potential category of questions that could fit on Metaculus and work as an “AGI fire alarm.” The questions are of the format “After an AI system achieves task x, how many years will it take for world output to double?”
Yes, the value of minimizing response time is a well-studied area of human-computer interfaces: https://www.nngroup.com/articles/response-times-3-important-limits/
I’m curious what cards people have paid to put in your deck so far. Can you share, if the buyers don’t mind?
Ralph Merkle’s DAO Democracy addresses the size of preferences because constituents only “vote” by reporting their own overall happiness level. Everything else is handled by conditional prediction markets (like in futarchy) to maximize the future happiness of the constituents. This means that if some issue is very important to a voter, it will have a greater impact on their reported happiness, which will in turn have a greater impact on which proposals get passed.
For reference: section 40 of Reframing Superintelligence: Comprehensive AI Services as General Intelligence.
Has this new congruency-based approach led to less, the same, or more productivity than what you were doing before? And how long have you been doing it?
Is losing weight one of your goals with this?
Like you said, since it hasn’t been studied, you’re not going to find anything conclusive about it, but it may be a good idea to skip the fast once a month (i.e. 3 weeks where you do 88-hour fasts, then 1 week where you don’t fast at all).
I object to the demonstration because it’s based on the false assumption that there’s a fixed amount of value (candy, money) to be distributed and that by participating in capitalism, you’re playing a zero-sum game. Most games played in capitalism are positive-sum—you can make more candy.
Do you have a source for the 80% figure?
I agree that this is a really important concept. Two related ideas are asymmetric risk and Barbell strategies, both of which are things that Nassim Nicholas Taleb writes about a lot.
What is that formula based on? I can’t find anything by googling. I thought it might be from the OpenAI paper Scaling Laws for Neural Language Models, but can’t find it with ctrl+F.
In Steve Omohundro’s presentation on GPT-3, he compares the perplexity of some different approaches. GPT-2 scores 35.8, GPT-3 scores 20.5, and humans score 12. Sources are linked on slide 12.
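For anyone unfamiliar with the metric: perplexity is the exponential of the average per-token negative log-likelihood, so lower means the model is less “surprised” by the text. A minimal sketch (the example probabilities are illustrative, not from the presentation):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(mean negative log-likelihood over tokens).

    A model that assigns each token probability p has perplexity 1/p,
    so lower perplexity means better next-token prediction.
    """
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning ~5% probability to each observed token
# has perplexity ~20, roughly GPT-3's reported score above.
logps = [math.log(0.05)] * 10
print(perplexity(logps))  # ≈ 20
```

So the GPT-2 → GPT-3 → human progression (35.8 → 20.5 → 12) corresponds to each observed token looking roughly 1/36, 1/20, and 1/12 likely on average.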
People are literally looting businesses and NPR is publishing interviews supporting it. They’re not just interviewing people who support it—the interviewer also supports it. What makes you think these aren’t actual policy proposals?
They may only propose it for deep social-signalling reasons as you say, but that doesn’t mean it’s not actually a proposal. Historically, we’ve seen that people are willing to go through with mass murders.
In the Gwern quote, what does “Even the dates are more or less correct!” refer to? Which dates were predicted for what?
This was mentioned in the “Other Constraints” section of the original post:
Inference costs. The GPT-3 paper (§6.3) gives 0.4 kWh per 100 pages of output, which works out to 500 pages per dollar, eyeballing hardware cost as 5x electricity. Scale up 1000x and you’re at $2/page, which is cheap compared to humans but no longer quite as easy to experiment with.
I’m skeptical of this being a binding constraint too. $2/page is still very cheap.
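The arithmetic behind the quoted figures checks out; here is a quick sketch (the ~$0.10/kWh electricity price is my assumption, not stated in the post):

```python
# Quoted figure: 0.4 kWh of electricity per 100 pages of output.
kwh_per_page = 0.4 / 100

# Assumed retail electricity price (my assumption): ~$0.10/kWh.
electricity_price_per_kwh = 0.10

# Eyeballing total cost (hardware + electricity) as ~5x the electricity bill.
cost_per_page = kwh_per_page * electricity_price_per_kwh * 5

print(1 / cost_per_page)     # ≈ 500 pages per dollar

# Scaling the model up 1000x, cost per page scales roughly linearly:
print(cost_per_page * 1000)  # ≈ $2 per page
```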