Really? That’s your argument? Do you really think people wouldn’t have small-talk topics, understand authority figures, or learn anything without these classes? If, after reading this, you still think those courses are essential to learning those skills, let alone that they teach them efficiently, I eagerly await your reply.
I didn’t say that she learned nothing of value; I said that the marginal value of reading additional books at this point is close to zero. The first few books were probably different. Also, one incompetent professor is far from my only reason for opposing affirmative action. Finally, I didn’t simply “not think of them as different”; I didn’t even have the mindset to understand the argument he was making when I first heard it, which is clear evidence against the claim that “every white person has internalized racism against black people and these are the stages of racism awareness”. One paragraph is not my entire mindset.
You missed my main reason for avoiding spoilers. It’s not that something is intended a certain way, or that I think avoiding them trains rationality better; it’s that doing things myself is far more fun than having them done for me. Trying to figure out how to solve a Rubik’s Cube myself was much more fun than being told the solution would have been. (Or figuring out the villain’s plot before the monologue, or whatever.)
The difference between the chess and go skill patterns is because chess and go have vastly different algorithms.
Chess skill improved linearly because the evaluation is easy to compute from point values (finding lines that win the most material relative to the opponent, or positions leading to them), and modern algorithms aren’t much different from early ones. In other words, taking the enemy’s queen without comparable cost is always extremely good (provided the computer can look far enough ahead to check for traps), so computers are mainly limited by how many turns ahead their processors can search.
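The approach described above can be sketched as material counting plus fixed-depth minimax search. This is a minimal illustrative toy, not any real engine’s implementation: the piece values are the standard textbook ones, and the move-generation and move-application functions are left as hypothetical parameters, since real move generation is out of scope here.

```python
# Classic chess-engine recipe: score positions by material point values,
# then search a fixed number of turns (plies) ahead with minimax.
# Boards are represented simply as collections of piece letters
# (uppercase = White, lowercase = Black).

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(board):
    """Sum point values: positive favors White, negative favors Black."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def minimax(board, depth, white_to_move, moves_fn, apply_fn):
    """Look `depth` plies ahead; a deeper search means stronger play.
    `moves_fn` lists legal moves, `apply_fn` returns the resulting board
    (both are placeholders for real move generation)."""
    moves = moves_fn(board, white_to_move)
    if depth == 0 or not moves:
        return material_score(board)
    results = [minimax(apply_fn(board, m), depth - 1, not white_to_move,
                       moves_fn, apply_fn) for m in moves]
    return max(results) if white_to_move else min(results)
```

The point of the comment is visible in the structure: `material_score` barely changed across decades of engines, so strength scaled mainly with how large a `depth` the hardware could afford.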
Go, however, is far more subtle: stones differ drastically in value depending on what happens around them, and a few stones in the wrong spot can lead to the loss of a quarter of the board 40 turns later in subtle ways, such as providing a ko threat or creating a dead shape. Counting territory five turns ahead is near-useless without considering how each stone interacts with all the others. Here the limiting factor is not processing power but algorithm design, and the rapid gain happened because of algorithmic insights.
Whether AI development will be sudden or gradual depends on which type of limiting factor it faces. Self-driving cars had a big jump and then a stall because the bottleneck is algorithm design. Computer graphics improved gradually because the bottleneck was processing power. Sentient AI could be like either of these, or have a different limiter I haven’t thought of, but whatever the limiting factor is will determine the rate of progress.
I’m not an expert in AI, but am very good at chess and go.
Thanks for your well explained response! I’ll keep your reasons in mind for future posts.
I suggest reading the “Fun Theory” sequence.
Most lies are bad, but there are circumstances where lying is necessary and does not make truth the enemy: namely, when telling the truth would cause immediate harm.
When people in Germany were sheltering people during the Holocaust and a Nazi official asked if they were hiding anyone, the correct response was “no”, even though it was a lie. When someone doesn’t believe in a religion, or is gay, or similar, but would be cast out of the home or “honor-killed” if their parents found out, they should lie until they have a way to escape.
This post isn’t wrong, but I doubt anyone today (except a few crazy people) disagrees with it. Do you think there is a significant risk of a large-scale human eugenics program happening before direct genetic modification becomes cheap enough to make this irrelevant?
Hmm, good thought.
One problem I see with the experimental setup is that it is impossible to remove energy loss from the system. For example, no mirrors are perfectly smooth, perfectly reflective, and perfectly aligned. Even if coherence formed, it would still come at the cost of waste heat, so it would be unclear whether entropy actually decreased.
Not quite, since although it never went that far, there was a legitimate concern that I could get killed. Also, I needed to show a specific example of a bully taking extra effort to do extra harm, and giving a real example would be, well, problematic.
What if it’s just regression to the mean? Maybe the main problem wasn’t that late Rome was unusually bad, but that Rome at its peak was anomalously successful, and this didn’t last because technology and culture just weren’t able to sustain an anomaly at the time?
Sorry, that was the biggest I could find.
The problem is that crushing poverty is one source of misery, but not the only one. Very poor countries see clear benefits from industrializing, but things like cultural pressures and instability also matter, so once resources are common those other factors dominate and additional industry doesn’t change much.
There is a fourth option: the “safe” set of values can be misaligned with humans’ actual values. Some values humans hold may be missing from the “safe” set entirely, or something in the safe set may fail to quite capture what it was meant to represent.
As a specific example, consider how a human might have defined values a few centuries ago. “Hmm, what value system should we build our society on? Aha! The seven heavenly virtues! Every utopian society must encourage chastity, temperance, charity, diligence, patience, kindness, and humility!” Then, later, someone tries to add happiness to the list. However, since happiness was never put into the constrained optimization function, it becomes a struggle to optimize for it.
This is NOT something that could only happen in the past. If an AI based its values today on what the majority agrees is a good idea, things like marijuana would be banned, and survival would be replaced by “security” or something else slightly wrong.