When I’ve previously considered human extinction caused by nuclear war, I’ve known that the immediate blasts wouldn’t kill everyone. However, what are the effects of a lower overall population with fewer habitable areas and less access to resources? That’s doubly true since the areas with more survivors will almost by definition be developing countries that are suddenly cut off from imports. I believe that humans as a species would likely survive, but I also suspect that it would be the end of modern civilization. Adding to that, I’ve seen hypotheses that the resources left underground but easily accessible with non-modern technology, especially fossil fuels, would not be enough to “reboot” civilization. Overall, even though the human species would very likely not go extinct, this would much more likely be the end of humanity as a potential space-faring species.
gbear605
Note that elite colleges are not revenue or profit maximizers. For the normal laws of economics to apply, a school needs to be either in a competitive field where there is a risk of the school going bankrupt, or there have to be people at the school who act as if that were the case. At elite colleges like Princeton or Harvard, neither of those holds.
This happens because although there is effectively infinite demand to be a student, the limiting factor is not price. In a normal market, the price would rise until demand equals supply. At Princeton or Harvard, that market-clearing price would probably be something like a million dollars per student-year. But elite colleges use a different algorithm to allocate slots: the admissions office. The admissions offices at Harvard and Princeton are not told how much applicants can afford, breaking the loop. Not only that, they explicitly seek out applicants who couldn’t traditionally afford to attend even at the average price.
You may think that loans explain this, letting anyone pay the highest price and then deal with the consequences later. However, Princeton and Harvard do not ask any of their students to take out loans; instead, they subsidize the cost for lower-income students with alumni donations and the payments of higher-income students. If you are a student whose family works minimum-wage jobs, you will pay $0 to go, assuming you’re smart enough to get in.
Obviously this means that richer applicants will start competing on non-monetary accomplishments, using their money as leverage, but this is only so effective. Admissions offices are well aware that this happens and treat this type of signaling as a negative. Some of it still works, but only weakly.
Elite colleges have also considered a system where all the slots are allocated randomly to applicants who meet a certain minimum bar. There would be competition to get over the bar, but the bar is already low enough that many students clear it while spending $0.
Beyond all that, Princeton and Harvard have enough reputation, alumni goodwill, and saved money that they could choose to act essentially however they want and they would still have enough money to operate as normal. Because of that, they do not face any pressure to raise tuition.
Overall, modeling elite nonprofit colleges like this as rational agents in the economic system is fundamentally flawed. When demand is infinite but supply is constrained, and the allocation method is not monetary, the normal rules of economics no longer apply.
I took the survey. I feel like the questions that ask for numeric answers about the probability of AI risk should have been optional, because I have very weak views about them.
Are… are you doing okay?
It would be insanely great if we periodically had judges evaluating all such requirements and regulations to see if the government had a plausible cost/benefit analysis case for why they were restricting our freedoms, the same way this judge evaluated the mask mandate. Or even better, if it happened without the need for a lawsuit.
Some founding fathers, most notably Jefferson, advocated for a mandatory repeal-and-selective-reinstate of all laws (including the constitution) every 19 years. His reasoning was that each generation should only enforce the laws that the people of that generation believe in. If the laws of the previous generation are still enforced, it’s an act of force by that past generation and not just part of the social contract of the current generation.
It makes a lot of sense looking at the world today. At this point, very few people support the TSA, but no legislator can advocate for getting rid of it without being scapegoated after any future terrorist attack.
Many liberal jurisdictions will keep their transportation mask mandates around for a while
At this point, it looks like the only cities still requiring it are NYC and Portland, and I wouldn’t be surprised if one of them stops it today or tomorrow. I’m not surprised—public transit authorities have been spending a lot of political capital on this.
One downside to using video games to measure “intelligence” is that they often rely on skills that aren’t generally included in “intelligence”, like how quickly and precisely you can move your fingers. Someone with poor hand-eye coordination will perform worse on many video games than someone with good hand-eye coordination.
A related problem is that video games in general have a large element of a “shared language”, where someone who plays lots of video games will be able to use skills from those when playing a new one. I know people who are certainly more intelligent than I am but who are less able when playing a new video game, because their parents didn’t let them play video games growing up (or because they’re older and didn’t grow up with video games at all).
I like the idea of using a different tool to measure “intelligence”, if you must measure “intelligence”, but I’m not sure that video games are the right one.
Based on what’s been said in this thread, donating more money to MIRI has precisely zero impact on whether they achieve their goals, so why continue to donate to them?
Another effect of altitude is the thinner atmosphere, and there is evidence that the thin air is the active ingredient in altitude’s weight-decreasing effect. For example, “One study on rats found that they ate 58% less one day after being transported to Pike’s Peak, and were still eating 16% less per day two weeks afterwards.” The immediacy of the effect strongly suggests that it’s not caused by anything in the water source.
one might speculate that “more classes to reduce collisions” could be part of the historical explanation for grammatical gender
Linguists are actually quite certain that this is the case. Many languages have more than two noun classes, though, using other features or arbitrary classifications that simply need to be memorized. One common division is animate/inanimate, and obviously under that divide (and many others), all people fall into one category.
Very good post, highly educational, exactly what I love to see on LessWrong.
Regarding the content of the post, I wonder if one helpful attribute of the system is that it makes the proposals concrete. You’re not arguing against “basic income”; you’re arguing against “the current proposal of basic income.”
I’m more familiar with DALL-E 2 than with Midjourney, so I’ll assume that they have the same shortcomings; if not, feel free to ignore this. There are still some crucial problems with AI art that will prevent it from being used for many types of art, though they will probably soon be fixed, and that’s why I would say “on the cusp” rather than “it’s already here”.

I think the biggest issue for your example with Magic cards is that a certain level of art style consistency between the cards in a set is necessary. From my experience with DALL-E, that consistency isn’t possible yet. You’ll create one art piece with a prompt, but then edit the prompt slightly and the result will have a rather different style. See, for example, Scott Alexander’s attempt at making stained glass: https://astralcodexten.substack.com/p/a-guide-to-asking-robots-to-design

I’m curious: if you made a set of Magic cards (even, say, ten cards) and then asked other people into Magic to decide which ten are better, how many would choose the existing set? I would bet that they would choose the existing set because of its style consistency.
Beyond that, like you said there are some places where the AIs are just not there yet. Images with text is one, like you mentioned. Another one that seems like it would be a big problem for a Magic set is human faces, which DALL-E is notoriously bad at. Worse, it’s bad at it in ways that are rather obvious to viewers.
Both of these issues seem likely to be solved soon, but they’re not here quite yet. My use of DALL-E so far would still incline me towards paying a real artist.
I think you’re totally spot on about ChatGPT and near term LLMs. The technology is still super far away from anything that could actually replace a programmer because of all of the complexities involved.
Where I think you go wrong is looking at the long term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element—the AI just needs to be good at guessing what requirements the product team and customers want and then asking questions and running tests to further divine how the product should work. If specifying a piece of software in English is a nightmare, then your boss’s job is already a nightmare, since that’s what they do. The key is that they can give a specification, answer questions about the specification, and review implementations of that specification along the way, and those are all things that an AI could do.
I’m already an intelligence that takes in English specifications and produces code, and there’s no fundamental reason that my intelligence can’t be replaced by an artificial one.
No matter what words are used to describe it, at some point your decision algorithm needs to categorize by cause in order to compute the correct treatment: for example, to give antibiotics to the patients with bacterial diseases and antivirals to the patients with viral diseases. If the authoritative body of professional psychiatrists has a “philosophical commitment” against this, that means we don’t have a science of psychiatry.
This is overstating your evidence. Categorizing by cause in order to compute the correct treatment is only helpful (in treatment) if treatments differ by cause. To some extent, that’s definitely true. If someone experiences sadness, you want to treat a short-term sadness caused by an event (e.g. a loved one dying) differently from a long-term sadness caused by a hormone imbalance (e.g. generic long-term depression). The APA does distinguish these—according to the DSM, you don’t diagnose sadness as depression if it is short-term and caused by a significant event. However, I’m not convinced that this applies to all issues.
Imagine a Sisyphus whose job is to keep a boulder balanced at the top of a flat-topped hill. If a wind pushes the boulder off the peak, he must push it back up. If a god knocks the boulder off, he must push it back up. The treatment is the same no matter the cause. Even in medicine, a broken bone is treated the same whether it was caused by falling out of a tree or by getting hit by a rock. If I’m a doctor and I know that you broke your bone, knowing the cause isn’t needed to treat it. Categorizing by cause is only useful to the degree that cause informs treatment more than symptoms do.
The key is to be able to distinguish illnesses by the relevant factors. When the APA decided to not recognize developmental trauma disorder, it was because they thought that knowing that a disorder was caused specifically by childhood trauma is not the primary piece of knowledge needed to help a patient. I’m admittedly not a psychiatrist, but that sounds very plausible to me.
Notably, the 538 prediction doesn’t include a number of outside factors, primarily around mail-in ballots and voter suppression. 538 has already talked about the problems with mail-in ballots being rejected, and there are also concerns about not having all of the ballots counted before the cut-off point where counting has to stop. Republicans have also made it harder for Democratic-leaning groups to vote. These are factors that will hurt Biden more than Trump. All those links are to 538, and there are other articles on the site about those same issues. If you believe in 538’s model, you should probably also believe their articles indicating that these outside factors will be important. If you don’t believe the articles, then why do you believe the model?
Either way, this is not a clear case where the market is wrong.
I’m planning on being there.
Also of note, December 9th is Smallpox Eradication Day!
That’s only true if the probability is a continuous function—perhaps the probability instantaneously went from below 28% to above 28%.
I generally agree with the idea—a range prediction is much better than a binary prediction—but a normal prediction is not necessarily the best. It’s simple and easy to calculate, which is great, but it doesn’t account for edge cases.
Suppose you made the prediction about Biden a couple months before the election, and then he died. If he had been taken off the ballot, he would have received zero votes, and even if he had been left on, “Biden” would have received far fewer votes. Under your normal model, the chance of either of those happening is essentially zero, but it actually had perhaps a 1-5% chance of happening. You can adjust for this by mixing multiple normal curves with different weights, though I’m not sure how to score such a prediction.
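The mixing idea can be sketched with Python’s stdlib `statistics.NormalDist`. The weights and parameters here are made up for illustration: 97% mass on the ordinary vote-share forecast, 3% on a hypothetical “candidate replaced” tail.

```python
from statistics import NormalDist

# Hypothetical two-component mixture for "Biden's vote share (%)":
# 97% weight on the ordinary forecast, 3% on the catastrophic tail.
components = [
    (0.97, NormalDist(mu=52, sigma=2)),  # normal campaign outcome
    (0.03, NormalDist(mu=3, sigma=2)),   # candidate dies / is replaced
]

def mixture_pdf(x: float) -> float:
    """Density of the weighted mixture at x."""
    return sum(w * d.pdf(x) for w, d in components)

# The mixture keeps most mass near 52% but reserves real probability
# for the scenario a single normal would call essentially impossible.
print(mixture_pdf(52.0))                     # high density at the main forecast
print(mixture_pdf(3.0))                      # small but non-negligible tail density
print(NormalDist(mu=52, sigma=2).pdf(3.0))   # essentially zero under one normal
```

Scoring is indeed the awkward part: the mixture is no longer summarized by a single mean and standard deviation, so you would have to score the full density (e.g. by log score) rather than the simple normal-based rule.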
It also doesn’t work well for exponential behavior. For COVID cases in a given period, a few days’ difference in when behavior changes could alter the count by a factor of 2 or more. That can be easily rectified by putting your predictions in log form, but you have to remember to do it.
Overall though, normal predictions work well for most predictions, and we’d be better off using them!
I often realize that I’ve had a headache for a while and had not noticed it. It has real effects—I’m feeling grumpy, I’m not being productive—but it’s been filtered out before my conscious brain noticed it. I think it’s unreasonable to say that I didn’t have a headache, just because my conscious brain didn’t notice it, when the unconscious parts of my brain very much did notice it.
After split-brain surgery, patients can experience a sensation on one side of their body and not notice it with the portion of the brain that controls speech (that is, the portion that seems conscious), while the other portion of the brain still experiences the sensation and reacts to it in ways that can seem inexplicable to the conscious portion (though the conscious brain will try to make up some sort of explanation).
The brain is not unitary, and it is so un-unitary that it seems like a mistake to even act as if subjective experience is a single reality.
“The Tails Coming Apart As Metaphor For Life” is a classic Slate Star Codex post about this, based on the 2014 LessWrong post “Why the tails come apart”. Both use the phrasing “tails coming apart” to refer to the bias, since in the graph there seem to be two separate “tails” of people even though both are subsets of the larger circular group.
Interestingly, neither post uses the term “collider bias”, but both are definitely talking about the same concept.
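The effect is easy to reproduce in a toy simulation, assuming two independent standard-normal traits and selection on their sum (the sum is the collider): among the selected group, the traits become negatively correlated even though they’re unrelated overall.

```python
import random

random.seed(0)

# Two independent "virtues" (say, skill and charisma); selecting on
# their sum is conditioning on a collider.
n = 100_000
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def corr(data):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*data)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in data) / len(data)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

print(corr(pairs))                                    # ~0: independent overall
selected = [(x, y) for x, y in pairs if x + y > 2.5]  # condition on the sum
print(corr(selected))                                 # clearly negative
```

This is the same geometry as the graphs in those posts: slice off the upper-right tail of a round cloud and the slice slopes the other way.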
The GPU company increase is notable because the reason Nvidia and AMD have done well has little to do with AI. It’s almost entirely about crypto, with some portion about video gaming. So while you would have done well investing in GPU companies in 2015 because of AI, your results wouldn’t have actually been causally connected to your reasoning. If you take out Nvidia and AMD, your results are not nearly as much better than SPY. And I’m not convinced that most of the rest of the tech increase has much to do with AI either, perhaps other than Tesla, though their rise seems more akin to GameStop’s than to that of a company valued on fundamentals.
Of course, you could also have done better than the market by just investing in tech stocks generally, but that’s not nearly as exciting a conclusion (though still a bit of one).
However, that’s missing that the EMH is fundamentally about risk: it’s easy to get better returns than the market, even over a five-year period, by investing with leverage. But then if the market goes down, it’s easy to lose everything. I haven’t calculated the numbers, but I suspect that someone who invested with leverage from 2015 to 2020 would have looked great in January 2020 and been bankrupt by April 2020. Tech has a different risk model than leverage, but it is definitely higher risk than the overall US economy.
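A back-of-the-envelope sketch with entirely made-up returns shows the shape of the problem: a 3x-leveraged position compounds beautifully through the good months and can then be wiped out by a single crash that the unleveraged investor survives.

```python
# Toy illustration with hypothetical monthly returns: two steady years
# of +2%/month, then one crash month like early 2020.
returns = [0.02] * 24 + [-0.34]

def grow(returns, leverage=1.0):
    """Final wealth from $1, rebalancing the leverage monthly."""
    wealth = 1.0
    for r in returns:
        wealth *= 1 + leverage * r
        if wealth <= 0:  # a leveraged account can be wiped out entirely
            return 0.0
    return wealth

print(grow(returns, leverage=1.0))  # unleveraged: still ahead after the crash
print(grow(returns, leverage=3.0))  # 3x leveraged: looked great, then ruined
```

The leveraged account is roughly 4x up right before the crash, which is exactly the “January 2020 looked great” picture; one month at 3 × (−34%) then takes it to zero.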
You can beat the market by investing in a high risk market, but that’s literally what the EMH tells you, so it’s a boring conclusion.