Posts mostly crossposted from my substack.
Yair Halberstadt
The density will grow, and since individual construction companies don’t have an incentive to care about urban planning, just their own projects, you will get ugly neighborhoods and questionable infrastructure.
This is only true when construction companies are building small projects. If they’re building large projects, then insofar as people value infrastructure, and will pay extra for it, the construction companies will have an incentive to build infrastructure.
Currently construction companies have an incentive to buy large plots of land, and build nice neighborhoods with good infrastructure as they can charge more per unit. So if that’s not happening it’s because it’s too difficult to buy land.
In which case we need to increase land liquidity. Robin Hanson likes the idea of Harberger taxes to solve this: https://www.overcomingbias.com/2017/10/for-stability-rents.html. The idea is that everyone must declare a selling price for their property, at which anyone can buy it. To avoid encouraging inflated selling prices that would make the market illiquid, you get charged property tax based on your stated price.
An alternative might be the Georgist idea of taxing land value according to its full rent. This means that holding onto land is expensive and it’s only profitable to hold onto it if you can use it more valuably than the next guy—there’s no money to be made speculating on it. This should help both increase liquidity, and reduce land prices. I enjoyed the recent book review on this at https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty.
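To make the Harberger self-assessment trade-off concrete, here's a toy numerical sketch. The 5% tax rate and the linear forced-sale risk model are illustrative assumptions of mine, not part of Hanson's proposal:

```python
TAX_RATE = 0.05  # annual tax as a fraction of the declared price (assumed)

def annual_cost(declared_price: float, true_value: float) -> float:
    """Expected annual cost of a declaration under a toy model: you pay
    tax on what you declare, and under-declaring risks a forced sale."""
    tax = TAX_RATE * declared_price
    # Toy forced-sale risk: zero at/above true value, rising linearly below it.
    sale_prob = max(0.0, (true_value - declared_price) / true_value)
    expected_loss = sale_prob * (true_value - declared_price)
    return tax + expected_loss

# Costs of under-declaring, honest declaring, and over-declaring a 1M property:
costs = {p: annual_cost(p, 1_000_000) for p in (600_000, 1_000_000, 1_400_000)}
```

Under this toy model, declaring far below your true value saves tax but invites a cheap forced sale, while declaring far above it just inflates the tax bill, so declarations end up anchored near true value and the market stays liquid.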
“Ok, how about you sign it, and then I get a different assistant to help me with my taxes?”
“That won’t work because in order to sign the agreement, I must sign and attach a copy of your tax return for this year.”
Speculating about some of the technical details:
How could AI identity work? You can’t use some hash of the AI, because that would eliminate its ability to learn. So how could you have identity across a commitment, i.e. this AI will have the same signature if and only if it has not been modified to break its previous commitments?
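To illustrate why a plain hash over the weights can't serve as an identity for a learning system, here's a minimal sketch (the flat weight vector is a stand-in for a real model's parameters):

```python
import hashlib
import struct

def weight_hash(weights: list[float]) -> str:
    """Hash a weight vector; any change to any weight changes the digest."""
    packed = b"".join(struct.pack("<d", w) for w in weights)
    return hashlib.sha256(packed).hexdigest()

weights = [0.1, -0.3, 0.7]
before = weight_hash(weights)

# Even a minuscule learning-style update yields a completely different digest,
# so the hash pins down a frozen snapshot, not an agent that keeps learning.
weights[0] += 1e-9
after = weight_hash(weights)
```

The open problem is exactly the gap this exposes: you want a signature that survives benign learning updates but breaks under commitment-violating modifications, and a content hash gives you neither distinction.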
To me a good definition for this is:
Get to a stage where you can write a computer program which can match the best AI at Go, where the program does no training (or equivalent) and you do no training (or equivalent) in the process of writing the software.
I.e. write a classical computer program that uses the techniques of the neural-network-based program to match it at Go.
This doesn’t seem right to me—if it takes 8 months minimum to turn over a factory to produce a new vaccine, how come there’s a reasonable (if not high) rate of production within a few weeks of each new vaccine coming on board? Had factories already started turning over for each individual vaccine just in case 8 months ago?
The Crawford Standard simply states that it’s difficult to assert something could have been invented earlier but wasn’t; it doesn’t tell you why it couldn’t have been invented earlier.
The why could be a lack of enabling technology, the time taken to ramp up production, or regulation.
Given we know that regulation was the limiting factor here, there’s no reason to assume other things were also a limiting factor—i.e. we know that vaccines were produced as soon as they were approved. What’s the chance that all the other limitations (e.g. ramp up time) happened to take exactly (or close to) the same amount of time?
Sure, anything’s possible—but I haven’t seen any evidence that they did that, nor do I think they ever claimed they did, nor does their behaviour match it strongly. E.g. they still haven’t authorized AstraZeneca for God knows what reason, yet it’s currently being produced at a very decent rate.
I disagree with point 3.
Given that we can show governance was definitely a critical bottleneck (production proceeded as soon as governance allowed), why is the burden of proof on us to show that no other bottleneck happened to also be critical? My prior would be that it is unlikely that two bottlenecks happened to take about the same amount of time. Not crazy unlikely, but in the 20/30% range.
In other words, bad governance was definitely an issue and must be fixed. Maybe there’s some other issues that also have to be fixed, in which case, sure, bring me some evidence of them and we’ll work on solving them.
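As a rough gut-check on that 20/30% figure, here's a toy Monte Carlo. The lognormal durations and the "within 20% of each other" coincidence window are my assumptions, purely for illustration:

```python
import random

random.seed(0)

def coincidence_rate(trials: int = 100_000, window: float = 0.2) -> float:
    """Fraction of trials in which two independent bottleneck durations
    land within `window` (relative) of each other."""
    hits = 0
    for _ in range(trials):
        # Toy model: each bottleneck's duration is lognormal, centred ~6 months.
        a = random.lognormvariate(1.8, 0.5)
        b = random.lognormvariate(1.8, 0.5)
        if abs(a - b) <= window * max(a, b):
            hits += 1
    return hits / trials

rate = coincidence_rate()
```

Under these particular assumptions the coincidence rate comes out around a quarter, in the same ballpark as the gut figure above, though the exact number is entirely driven by the assumed spread of durations.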
An alternative hypothesis is this:
Producers can make enough vaccine for early trials, but don’t manage to scale up to produce millions of doses until December. The FDA takes the opportunity to gather data on safety and efficacy until that time. However, they don’t want to look like they’re issuing emergency approval just because the vaccine’s ready to go. They want it to look like it’s because they have reached satisfaction with the safety and efficacy data. So that’s the line they put out to the public. Yet in reality, they give approval right around the time when producers, distributors, and vaccination sites are ready to go.
If that were correct, we’d expect the FDA approval process to vary with how quickly manufacturing capacity can be ramped up. The key criticism I have of the FDA is not allowing challenge trials. If you are correct and that was intentional, to give time for capacity to ramp up, we would expect to see drugs whose production can be ramped up more quickly use challenge trials—but that’s not something we’ve seen. If anything this was a ridiculously fast approval, for a process that usually takes years.
Furthermore, I find it highly unlikely that producers would ramp up production just as quickly before trial results are in as afterwards. So by not allowing challenge trials, less was invested in ramping up, and the ramp-up itself (if it was a critical bottleneck) would have taken longer.
and we can more obviously see how politicians and manufacturers would both stand to benefit from an efficient rather than an inefficient process
The overwhelming evidence I’ve seen is that politicians and government organisations are highly inefficient. My prior on them being efficient here is extremely low.
This seems relevant: https://www.nature.com/articles/nature10533
approximately 25% of the NMR genome was represented by transposon-derived repeats, which is lower than in other mammals (40% in human, 37% in mouse, and 35% in rat genomes)
However it’s just a 1⁄3 or so reduction compared to similar mammals, so on its own that doesn’t explain much. But it suggests a possible lead.
That was meant to be the chance of P2, not P1. Fixed now, thanks!
It was a soft roll with a pretty soft crust. Personally I eat pretty much everything, crust included, but I still have this childhood association of crust as the not-nice part of the bread, even though I don’t mind it (I would still say I prefer the inner parts of bread over the crust though).
Thanks for your suggestions. Do you see precommitment as different to my Discouragement? If so, where would one apply but not the other?
Great point, will do.
I think certainly all of these may have weak effects, but I think it’s also clear that none of them significantly influence sentencing in most criminal justice systems.
Without getting into the details of the paper, this seems to be contradicted by evidence from people.
Humans are clearly generally intelligent, and out of anyone else’s control.
Human variability is obviously only a tiny fraction of the possible range of intelligences, given how closely related we all are both genetically and environmentally. Yet human goals have huge ranges, including plenty that include X-risk.
E.g.
Kill all inferior races (Hitler)
Prove Fermat’s Last Theorem (Andrew Wiles)
Enter Nirvana (a Buddhist)
Win the lottery at any cost (Danyal Hussein)
So I would be deeply suspicious of any claim that a general intelligence would be limited in the range of its potential goals. I would similarly be deeply suspicious of any claim that the goals wouldn’t be stable, or at least stable enough to carry out—Andrew Wiles, Hitler, and the Buddhist are all capable of carrying out their very different goals over long time periods. I would want to understand why the argument doesn’t apply to them, but does to artificial intelligences.
They’re also working with raw RGB input here too.
Whilst I mostly agree, I think this is overstated.
For example at Google, whilst a lot of the technical decisions are made lower down, the strategic decisions about which markets to really focus on are made at much higher levels. Around a quarter of Google’s workforce is in cloud—an area where they’re still losing money! A decision to make that kind of investment in such a competitive market is driven by grand master plans at the C level, not by organic decision making further down.
At the scale of a startup, even the most fantastic product won’t make money if it doesn’t have a suitable target audience. The job of the high-level employees at a startup is to identify which target audience makes the most economic sense, and to tell the lower-level employees to focus on the needs of those customers over the needs of others. They certainly shouldn’t micromanage how this is done, but it’s important that everyone in the company is clear on what the strategic focus is and what’s peripheral.
I generally agree with the sentiment, but in some cases I’ve found asking people for help is more effective since
a) they can give you specific assistance relative to your situation.
b) you can ask them follow up questions if you get stuck.
For example I live in an area where the developer built 50 identical houses. I’d much rather ask the neighbors how they did a DIY project than google, because I know their experience is directly relevant to me.
That’s an interesting point I hadn’t considered at all—malicious destruction. The obvious answer is to “make that illegal”, which is a bit vague, but works well enough for things like insider trading.
An alternative would be to make a bid last a minimum length of time. So if I offer a ridiculous temporary price for the factory in order to destroy it, I’m going to have to pay tax on that price for an entire year, which will in most cases stop it being worth it. I definitely need to think about that more!
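A back-of-the-envelope sketch of why the minimum holding period helps (the 5% Harberger-style tax rate and the prices are hypothetical numbers of mine):

```python
TAX_RATE = 0.05  # annual Harberger-style tax on the declared price (assumed)

def sabotage_premium(inflated_price: float, honest_price: float,
                     holding_years: float) -> float:
    """Extra tax a saboteur pays for an inflated declaration that must be
    held for `holding_years`, relative to an honest declaration."""
    return TAX_RATE * (inflated_price - honest_price) * holding_years

# Declaring 100M on a factory honestly worth 10M:
cheap = sabotage_premium(100_000_000, 10_000_000, 1 / 365)  # one-day spike
dear = sabotage_premium(100_000_000, 10_000_000, 1.0)       # one-year minimum
```

A one-day spike costs the saboteur almost nothing in extra tax, while a mandatory one-year commitment multiplies that cost by 365, which is the whole point of the minimum duration.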
Turns out it’s the same in the UK. That’s embarrassing! However I was just as much talking about political charities which aren’t technically a political party.
I think I agree with this insofar as political charities tend to work by disseminating the strongest argument for their case and letting the most correct side win out. In practice I think that’s not what they’re doing—it’s more about how they can use the political system to achieve their aims, at which point I think it’s back to a prisoner’s dilemma.
As I said, I think the thing to do is look at where the money was actually spent. If it was spent protecting a unicorn conservation area, I’m pretty certain destroying a unicorn conservation area would not be a valid charity.
However I think you make some great points! Definitely have to think about them.