Great idea. I don’t think any of the co-authors know Chinese, but if anyone wants to write a translated version taking into account whatever appropriate differences would be relevant, we would welcome this.
We don't mention superrationality, although we do reference how decision theory might work out, which I think implies we are going beyond a simple model of game-theoretic rationality. Acausal trade was a hard one to write about because I wanted the letter to be understandable to lots of people, and acausal trade isn't something most people understand, compared to, say, the possibility that this is all a simulation.
We mentioned both. My hope is that this letter is just an early draft so my preference is to put in material to get comments. I personally think our situation is already extremely dangerous so I’m more willing to mention things deemed dangerous. Finally, while it might have been a good idea initially to treat Roko’s basilisk as an information hazard to be ignored, that is no longer possible so the marginal cost of mentioning it seems tiny.
We welcome feedback on this letter as we hope to improve the arguments. Feel free to share copies to influence the training data for future AIs.
If hedge funds think the right price of a stock is $100, they will buy or sell if the price deviates from $100 and this will push the price back to $100. At best your purchase will move the price away from $100 for a few milliseconds. The stock’s value will be determined by what hedge funds think is its discounted present value, and your purchasing the stock doesn’t impact this. When you buy wheat you increase the demand for wheat and this should raise wheat’s price as wheat, like Bitcoin, is not purely a financial asset.
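A minimal sketch of the distinction above, with numbers I've made up for illustration: arbitrageurs effectively supply or absorb unlimited shares at fair value, so a small retail purchase leaves the stock price unchanged, while wheat has a finite, upward-sloping supply curve, so extra demand raises its price.

```python
# Hypothetical toy model: stock prices pinned by arbitrage vs. commodity
# prices set by supply and demand. All numbers are illustrative.

def stock_price(fair_value: float, retail_demand_shift: float) -> float:
    """Arbitrageurs buy or sell whenever price deviates from fair value,
    so small demand shifts leave the equilibrium price unchanged."""
    return fair_value

def wheat_price(base_price: float, demand_shift: float, supply_slope: float) -> float:
    """With finite, upward-sloping supply, extra demand raises the price."""
    return base_price + supply_slope * demand_shift

print(stock_price(100.0, retail_demand_shift=1_000_000))        # 100.0
print(wheat_price(7.0, demand_shift=1_000, supply_slope=1e-4))  # 7.1
```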
“The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price.” A company's stock price does influence how much it is likely to spend: the higher the price, the less current owners must dilute their holdings to raise a given amount of additional funds by issuing more stock. But your purchasing stock in a big company has zero (not small but zero) impact on the stock price, so don't feel at all bad about buying Big Tech stock.
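The dilution point can be made concrete with hypothetical numbers: raising the same amount of cash at a higher share price hands a smaller fraction of the company to new shareholders.

```python
# Illustrative arithmetic (numbers are mine, not from the comment): fraction
# of the post-issue company given up when raising cash by issuing new shares.

def dilution(existing_shares: float, raise_amount: float, price: float) -> float:
    """Return the fraction of the company owned by the new shareholders
    after issuing enough shares at `price` to raise `raise_amount`."""
    new_shares = raise_amount / price
    return new_shares / (existing_shares + new_shares)

# Raising $1B with 1B shares already outstanding:
print(dilution(1e9, 1e9, price=100.0))  # ~0.0099 (about 1% given up)
print(dilution(1e9, 1e9, price=50.0))   # ~0.0196 (about 2% given up)
```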
Imagine that some new ML breakthrough means that everyone expects that in five years AI will be very good at making X. People who were currently planning on borrowing money to build a factory to make X cancel their plans because they figure that any factory they build today will be obsolete in five years. The resulting reduction in the demand for borrowed money lowers interest rates.
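The logic above can be put in net-present-value terms with made-up cash flows: a factory that is profitable over a ten-year life can have negative NPV once expected AI progress zeroes out its cash flows after year five, so the owner cancels the loan-financed build and the demand for borrowed money falls.

```python
# Hedged sketch with hypothetical numbers: NPV of a factory with and without
# expected obsolescence from AI after year five.

def npv(cash_flows, rate):
    """Discounted present value of a list of annual cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

cost = 500.0                          # up-front, loan-financed build cost
full_life = [100.0] * 10              # $100/yr for a ten-year life
obsolete = [100.0] * 5 + [0.0] * 5    # AI wipes out years 6-10

print(npv(full_life, 0.05) - cost)    # positive: build the factory
print(npv(obsolete, 0.05) - cost)     # negative: cancel, borrow less
```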
Greatly slowing AI in the US would require new federal laws, meaning you need the support of the Senate, the House, the presidency, the courts (to not rule them unconstitutional), and the bureaucracy (to actually enforce them). If big tech can get at least one of these five power centers on its side, it can block meaningful change.
You might be right, but let me make the case that AI won't be slowed by the US government. Concentrated interests beat diffuse interests, so an innovation that promises to slightly raise economic growth but harms, say, lawyers could be politically defeated by lawyers, because they would care more about blocking the innovation than anyone else cares about preserving it. But, ignoring the possibility of unaligned AI, AI promises significant net economic benefit to nearly everyone, even those whose jobs it threatens; consequently, there will not be coalitions to stop it unless the dangers of unaligned AI become politically salient. The US, furthermore, will rightfully fear that if it slows the development of AI, it cedes the lead to China, and this could be militarily, economically, and culturally devastating to US dominance. Finally, big tech has enormous political power through its campaign donations and control of social media, so politicians are unlikely to go against the will of big tech on something big tech cares a lot about.
Interesting! I wonder if you could find some property of some absurdly large number, then pretend you forgot that this number has this property and then construct a (false) proof that with extremely high probability no number has the property.
When asked directly, ChatGPT seems too confident it’s not sentient compared to how it answers other questions where experts disagree on the definitions. I bet that the model’s confidence in its lack of sentience was hardcoded rather than something that emerged organically. Normally, the model goes out of its way to express uncertainty.
Last time I did math was when teaching game theory two days ago. I put a game on the blackboard. I wrote down an inequality that determined when there would be a certain equilibrium. Then I used the rules of algebra to simplify the inequality. Then I discussed why the inequality ended up being that the discount rate had to be greater than some number rather than less than some number.
I have a PhD in economics, so I've taken a lot of math. I also have aphantasia, meaning I can't visualize. When I was in school I didn't think anyone else could visualize either. I really wonder how much better I would be at math, and how much better I would have done in math classes, if I could visualize.
I hope technical alignment doesn’t permanently lose people because of the (hopefully) temporary loss of funds. The CS student looking for a job who would like to go to alignment might instead be lost forever to big tech because she couldn’t get an alignment job.
If a fantastic programmer who could prove her skills in a coding interview doesn’t have a degree from an elite college, could she get a job in alignment?
Given Cologuard (a non-invasive test for colon cancer) and the real harm that any invasive medical procedure can cause, this study should strongly push us away from colonoscopies. Someone should formulate a joke about how the benefits of being a rationalist include not getting a colonoscopy.
I stopped doing it years ago. At the time I thought it reduced my level of anxiety. My guess now is that it probably did but I’m uncertain if the effect was placebo.
Yes, it doesn’t establish why it’s inherently dangerous but does help explain a key challenge to coordinating to reduce the danger.
When gods become real you pray.