I suggest reading the “Fun theory” sequence.
What if it’s just regression to the mean? Maybe the main problem wasn’t that late Rome was unusually bad, but that Rome at its peak was anomalously successful, and this didn’t last because the technology and culture of the time just weren’t able to sustain an anomaly?
Most lies are bad, but there are circumstances where lying is necessary and does not make one an enemy of the truth: when telling the truth would cause immediate harm.
When people in Germany were sheltering people during the Holocaust, and a Nazi official asked if they were hiding anyone, the correct response was “no” even though it was a lie. When someone doesn’t believe in a religion or is gay or something, but they would be cast out of the home or “honor-killed” if their parents found out, they should lie until they have a way to escape.
This post isn’t wrong, but I doubt anyone today (except a few crazy people) disagrees with it. Do you think there is a significant risk of a large-scale human eugenics program happening before direct genetic modification becomes cheap enough to make this irrelevant?
Sorry, that was the biggest I could find
The problem is that crushing poverty is one source of misery, but not the only one. This implies that very poor countries see clear benefits from industrializing, but things like cultural pressures and instability also have an effect, so once resources are plentiful, those other factors dominate and additional industry doesn’t change things much.
Thanks for your well explained response! I’ll keep your reasons in mind for future posts.
Really? That’s your argument? Do you really think people wouldn’t have small-talk topics, or understand authority figures, or learn anything without these classes? If, after reading this, you still think those courses are essential to learning those skills, let alone that they teach them efficiently, I eagerly await your reply.
I didn’t say that she learned nothing of value; I said that the marginal value of reading additional books at this point is close to zero. The first few books were probably different. Also, one incompetent professor is far from my only reason for opposing affirmative action. Finally, I didn’t simply “not think of them as different”; I didn’t even have the mindset to understand the argument he was making when I first heard it, which is clear evidence against the claim that “every white person has internalized racism against black people and these are the stages of racism awareness”. One paragraph is not my entire mindset.
There is a fourth option: the “safe” set of values can be misaligned with humans’ actual values. Some values that humans hold are either missing from the “safe” set entirely, or are represented there by something that doesn’t quite match what it was trying to capture.
As a specific example, consider how a human might have defined values a few centuries ago. “Hmm, what value system should we build our society on? Aha! The seven heavenly virtues! Every utopian society must encourage chastity, temperance, charity, diligence, patience, kindness, and humility!” Then, later, someone tries to put happiness somewhere in the list. However, since happiness was not put into the constrained optimization function, it becomes a challenge to optimize for it.
This is NOT something that would only happen in the past. If an AI based its values today on what the majority agrees is a good idea, things like marijuana would be banned and survival would be replaced by “security” or something else slightly wrong.
Just to be clear, the /s tag means sarcasm. I’ve seen it used elsewhere on the internet, but I’m not sure how commonly it’s understood yet.
The first presidential election I could vote in was Hillary vs Trump. Also, I was not in a swing state anyway.
I suuuure felt influential as someone who didn’t fall for Us vs. Them /s
If it shouldn’t be compared to reality, then why are the other posts not also voted down? They are both about how the real world is comparable to the game. Aiyen even has “Sadly this one is likely true irl” when explaining one of the points.
This post was ridiculous. Many liberal policies have had clear effects, such as civil rights legislation, and Jimmy Carter creating DECADES-LASTING PEACE IN THE MIDDLE EAST BETWEEN ISRAEL AND NEIGHBORING COUNTRIES after virtually continuous war.
Hmm, good thought.
One problem I see with the experimental setup is that it is impossible to eliminate energy loss from the system. For example, no mirrors are perfectly smooth, perfectly reflective, and perfectly aligned. Even if coherence is formed, it would still come at the cost of heat, so it would be unclear whether entropy actually decreases.
The difference between the chess and go skill patterns is because chess and go have vastly different algorithms.
Chess skill improved linearly because the evaluation is easy to compute with point values (finding the lines that win the most material relative to the opponent, or positions leading to that), and modern algorithms aren’t much different from early ones. In other words, taking the enemy’s queen without comparable cost is always extremely good (if the computer can look far enough ahead to check for traps), and computers are mainly limited by how many turns ahead their processor can search.
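The kind of engine described above can be sketched as a material-count evaluation plus fixed-depth minimax search. This is a minimal toy sketch, not a real chess engine: the game interface (`moves`, `apply_move`) is a hypothetical abstraction, and only the classic piece point values are real.

```python
# Classic material point values for chess pieces (pawn, knight,
# bishop, rook, queen) -- the crude evaluation described above.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(my_pieces, their_pieces):
    """Evaluate a position purely by summing material point values."""
    return (sum(PIECE_VALUES[p] for p in my_pieces)
            - sum(PIECE_VALUES[p] for p in their_pieces))

def minimax(position, depth, maximizing, evaluate, moves, apply_move):
    """Search `depth` plies ahead with a generic game interface.

    `moves` and `apply_move` are hypothetical callbacks supplied by the
    caller. Deeper search means stronger play, which is why chess
    strength scaled fairly smoothly with processing power.
    """
    legal = moves(position, maximizing)
    if depth == 0 or not legal:
        return evaluate(position)
    child_values = (
        minimax(apply_move(position, m), depth - 1,
                not maximizing, evaluate, moves, apply_move)
        for m in legal
    )
    return max(child_values) if maximizing else min(child_values)
```

For example, `material_score(["Q", "R", "P"], ["R", "P"])` returns 9: being up a queen dominates the evaluation, exactly the “take the queen without comparable cost” heuristic. A real engine replaces the toy callbacks with actual move generation and adds pruning, but the core loop is the same.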
Go, however, is far more subtle: pieces differ drastically in value depending on the surrounding position, and a few stones in the wrong spot can lead to the loss of a quarter of the board 40 turns later in subtle ways, such as by providing a ko threat or creating a dead shape. Counting the territory five turns ahead is near-useless without considering how each stone interacts with all the others. In this case, the limiting factor is not processing power but algorithm design, and the rapid gain happened because of insights in algorithms.
Whether AI development is sudden or gradual will be determined by which type of limiting factor it has. Self-driving cars had a big jump then a stall because the bottleneck is algorithm design. Computer graphics improved gradually because the bottleneck was processing power. Sentient AI could be like one of these, or have a different limiter I haven’t thought of, but whatever the limiting factor is would determine the rate of progress.
I’m not an expert in AI, but am very good at chess and go.