FWIW I normally eat dinner around 6, go to bed 5 hours later at 11pm, and eat my next meal 8.5 hours later at 7:30am, at which point “break-fast” is certainly the right word, since I haven’t eaten for over 13 hours. Contrast that with breakfast, which only has to last me 5 hours (until lunch at 12:30pm), and lunch, which again only has to last me 5.5 hours (until 6pm).
People say that meta-analyses can weed out whatever statistical vagaries there may be from individual studies; but looking at that graph from the meta-study of saturated fat, I’m just not convinced of that at all. Like, relative risk of CVD events suddenly goes from 0.2 to 0.8 at a threshold of 9%, and then just stays there? Relative risk of stroke goes from 0.6 at 9% to 0.9 at 12% and then down to 0.5 at 13%? Does that say to you, “more saturated fat is bad”, or “there’s a statistical anomaly causing this jump”?
The “purpose” of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art.
Not only skill level, but usually physical capability level (as proxied by weight and sex) as well. As an aside, although I’m not at all knowledgeable about martial arts or MMA, it always seemed like an interesting thing to do might be to use some sort of an Elo system for fighting as well: a really good lightweight might end up fighting a mediocre heavyweight, and the overall winner for a year might be the person in a given <skill, weight, sex> class that had the highest Elo rating. The only real reason to limit the Elo gap between contestants would be if there were a higher risk of injury, or if the resulting fights were consistently just boring. But if GGP is right that a big upset isn’t unheard of, it might be worth 9 boring fights for 1 exciting upset.
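In case the mechanics aren’t familiar, here’s a minimal sketch of the standard Elo update rule (the K-factor of 32 and the example ratings are illustrative assumptions, not real fight data). The point is that such a system naturally handles cross-weight-class matchups: an expected win barely moves anyone’s rating, while an upset moves both ratings a lot.

```python
# Minimal sketch of the standard Elo update rule.
# The K-factor and the ratings below are illustrative assumptions, not real fight data.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that fighter A beats fighter B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one bout."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# A really good lightweight (1900) vs a mediocre heavyweight (1700):
print(update(1900, 1700, a_won=True))   # expected win: ratings barely move
print(update(1900, 1700, a_won=False))  # upset loss: both ratings move a lot
```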
I like the MVP! One comment re the idea of this becoming a larger thing in journalism, in relation to Goodhart’s Law (“Once a measure becomes a target, it ceases to be useful as a measure”):
Affecting policy and public opinion is a “target”
“Real” journalism affects both public opinion and policy, and thus is a “proxy target”
If “real” journalism started being affected by prediction markets, then prediction markets would also become a proxy target
This would destroy their usefulness as measures
For example, even now, how much of the “85% chance Russia gains territory” is pure “wisdom of crowds” placing bets based on knowledge, and how much is the Kremlin buying “Russia gains territory” shares, in an effort to convince people that things will go well for them? If the NYT and the Washington Post—and then Senators—regularly quoted prediction markets, you can bet that kind of strategic share-buying would go into overdrive.
I was chatting with a friend of mine who works in the AI space. He said that the big thing that got them to GPT-4 was the data set, which was basically the entire internet. But now that they’ve given it the entire internet, there’s no easy way for them to go further along that axis; the next big increase in capabilities would require a significantly different direction than “more text / more parameters / more compute”.
Thanks for these, I’ll take a look. After your challenge, I tried to think of where my impression came from. I’ve had a number of conversations with relatives on Facebook (including my aunt, who is in her 60′s) about whether GPT “knows” things; but it turns out so far I’ve only had one conversation about the potential of an AI apocalypse (with my sister, who started programming 5 years ago). So I’ll reduce confidence in my assessment re what “people on the street” think, and try to look for more information.
Re HackerNews—one of the tricky things about “taking the temperature” on a forum like that is that you only see the people who post, not the people who are only reading; and unlike here, you only see the scores for your own comments, not those of others. It seems like what I said about alignment did make some connection, based on the up-votes I got; I have no idea how many upvotes the dissenters got, so I have no idea if lots of people agreed with them, or if they were the handful of lone objectors in a sea of people who agreed with me.
Can you give a reference? A quick Google search didn’t turn anything like that up.
To me it’s an attempt at the simple, obvious strategy of telling people ~all the truth he can about a subject they care a lot about and where he and they have common interests. This doesn’t seem like an attempt to be clever or explore high-variance tails. More like an attempt to explore the obvious strategy, or to follow the obvious bits of common-sense ethics, now that lots of allegedly clever 4-dimensional chess has turned out stupid.
But it does risk giving up something. Even the average tech person on a forum like Hacker News still thinks the risk of an AI apocalypse is so remote that only a crackpot would take it seriously. Their priors regarding the idea that anyone of sense could take it seriously are so low that any mention of safety seems to them a fig-leaf excuse to monopolize control for financial gain; as believable as Putin’s claims that he’s liberating Ukraine from Nazis. (See my recent attempt to introduce the idea here.) The average person on the street is even further away from this, I think.
The risk then of giving up “optics” is that you lose whatever influence you may have had entirely; you’re labelled a crackpot and nobody takes you seriously. You also risk damaging the influence of other people who are trying to be more conservative. (NB I’m not saying this will happen, but it’s a risk you have to consider.)
For instance, personally I think the reason so few people take AI alignment seriously is that we haven’t actually seen anything all that scary yet. If there were demonstrations of GPT-4, in simulation, murdering people due to mis-alignment, then this sort of a pause would be a much easier sell. Going full-bore “international treaty to control access to GPUs” now introduces the risk that, when GPT-6 is shown to murder people due to mis-alignment, people take it less seriously, because they’ve already decided AI alignment people are all crackpots.
I think the chances of an international treaty to control GPUs at this point are basically zero. I think our best bet for actually getting people to take an AI apocalypse seriously is to demonstrate an un-aligned system harming people (hopefully only in simulation), in a way that people can immediately see could extend to destroying the whole human race if the AI were more capable. (It would also give all those AI researchers something more concrete to do: figure out how to prevent this AI from doing this sort of thing; figure out other ways to get this AI to do something destructive.) Arguing to slow down AI research for other reasons—for instance, to allow society to adapt to the changes we’ve already seen—will give people more time to develop techniques for probing (and perhaps demonstrating) catastrophic alignment failures.
Sorry—that was my first post on this forum, and I couldn’t figure out the editor. I didn’t actually click “submit”, but accidentally hit a key combo that it interpreted as “submit”.
I’ve edited it now with what I was trying to get at in the first place.
People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.
Part of the problem with these two is that whether an apocalypse happens or not often depends on whether people took the risk of it happening seriously. We absolutely could have had a nuclear holocaust in the 70′s and 80′s; one of the reasons we didn’t is that people took it seriously and took steps to avert it.
And, of course, whether a time slice is the most important in history, in retrospect, will depend on whether you actually had an apocalypse. The 70′s would have seemed a lot more momentous if we had launched all of our nuclear warheads at each other.
For my part, my bet would be on something like:
O. Early applications of AI/AGI drastically increase human civilization’s sanity and coordination ability; enabling humanity to solve alignment, or slow down further descent into AGI, etc. (Not in principle mutex with all other answers.)
But more specifically:
P. Red-teams evaluating early AGIs demonstrate the risks of non-alignment in a very vivid way; they demonstrate, in simulation, dozens of ways in which the AGI would try to destroy humanity. This has an effect on world leaders similar to observing nuclear testing: It scares everyone into realizing the risk, and everyone stops improving AGI’s capabilities until they’ve figured out how to keep it from killing everyone.
Hey! As an Evangelical Christian whose church sends out church plants fairly regularly, I appreciated the basically sympathetic outside-in view of ourselves. Love this: “The role of a pastor is to enable Jesus to take as many shots on goal as possible.”
If I could add a bit of extra perspective:
At least in my circles of Evangelicalism, having a seminary degree is absolutely seen as a must. I’m happy to believe there are other circles where it’s not as important.
In addition to “convergent evolution”, there is a lot of explicit cross-pollination. I took a one-credit seminary course (see?) on “Philosophy of Ministry”, and the lecturer repeatedly referenced the organizational framework from the book “Barbarians to Bureaucrats”, which is explicitly about the corporate life cycle. I’ve read lots of books about church-planting and mission that clearly have influence from the corporate world, and lots of books about business that clearly have Christian influences.
Re launching with a “support team” from the sending church: I think you’re pretty close to the mark. There’s a massive amount of logistics required to run a service: set-up, tear-down, electrical, audio, music, coffee, food, etc etc etc; and of course that doesn’t count things like accounting, legal, secretarial, social media, website, graphic design, and everything else needed to run a small organization. Having a team of enthusiastic people doing all that work for free is a huge help. So is, as you say, having a core of enthusiastic people listening to you preach every Sunday. Imagine the difference between standing up to preach to maybe a dozen strangers, or maybe nobody at all, vs knowing you’re going to have a minimum of 15-20 supportive and enthusiastic listeners. And of course, so is just personally giving good advice and being encouraging.
To some degree the “creative destruction” thing is straight from Jesus: “If the salt loses its saltiness, how can it be made salty again? It’s good for nothing but to be thrown out and trampled underfoot.” “The axe is already at the root of the trees, and every tree that does not produce good fruit will be cut down and thrown into the fire.” He tells a story about a guy who goes around throwing seed everywhere; most don’t end up producing much fruit for various reasons, but a few do.
If there’s one weakness of the piece, it’s the implication about the percentage of narcissists. You state that it’s the sort of job that would be attractive to narcissists, which is certainly true. And it’s undeniable that narcissists occasionally end up in positions of power (Mars Hill is a great example). But there’s sort of an unstated implication, therefore, that a high (though unspecified) percentage of people in church plants are narcissists, because you don’t see anything in particular preventing it.
There are several filters; the big one being that it’s just a lot of work. You’re expected to work long hours, be humble, put up with all kinds of criticism, be willing to do low-level service, etc etc. You’re going to have a hard time doing your plant without that initial “support team”, and you’re going to have a hard time finding an enthusiastic “support team” without playing the role. There are, on the whole, far easier ways to run your petty kingdom than by doing a church plant.
Which isn’t to say it doesn’t happen. From what I know, cancer-like mutations which cause unlimited cell growth happen all the time; after all, uniform cooperation of every cell in the body is an evolutionarily unstable equilibrium. But the body has mechanisms to detect and counter these. What we call cancer only occurs when a mutation has managed to evade the body’s defenses. I think a similar process has happened when a genuine narcissist’s church plant gains significant traction.