I think I agree with your comment except for the “but.” AFAICT it doesn’t contradict mine? In your parenthetical scenario, #3 also does not hold—the CDT agent has no negotiating power against the tit-for-tat bot.
I think that’s implicitly covered under #3. The ability to alter outcomes of future interactions is a form of negotiating power.
This seems like a wrong calculation on several counts:
It makes me very sad that these problems were not immediately obvious to a group of “consultants, data analysts and managers.” OTOH, maybe they weren’t asked? Or else maybe there are just even more skill issues in those fields than I realized.
I drive a Sierra 2500, which has a turning circle of ~53′. It really does change how (and where) you have to drive.
In any case I agree something like this should exist.
The correct amount of time and effort to devote to the meta-level is not 100% (you don’t do anything useful), and not 0% (you don’t know how to do anything well). Somewhere in the middle is the optimal amount, and that amount will differ between people for all sorts of reasons. What do you think the optimal amount is for you, and why? That would essentially remove the problem this piece talks about from the piece itself, by tying the thinking back to a real-world problem you’re trying to solve.
I read that line as Zvi talking about privately owned self-driving cars, not just robotaxis. Otherwise yeah it’s very similar.
Edit to add that the first Kelsey Piper quote is about feeling bad about waiting three weeks to go public with an unpopular judgment call. Meanwhile, I think we can all comfortably look at false things the mainstream authorities stuck with for years, and some they still haven’t acknowledged.
I have some very mixed feelings about this post. On the one hand, I get exactly where you’re coming from. On the other, I think there are genuinely important second-order effects to consider.
Basically: if a public intellectual consistently tries to tell people their true-but-difficult-or-unpopular opinions, one (IMO likely) outcome is that over time, they lose (much of) their audience, and cease to be a public intellectual. But if they never tell the truth about their opinions, then their status as a public intellectual isn’t doing anyone any good.
The other side of this from my POV is that successfully becoming a public intellectual involves building habits of thought and communication that can make it difficult to notice when the moment comes to really put your cards down on the table and tell the honest but hard truth. I don’t follow Ball closely enough to judge, but Kelsey Piper and Will MacAskill have, in my opinion, done amazingly well on this front overall.
I think the SSC post on Kolmogorov Complicity and the Scott Aaronson post it builds off of capture versions of a similar problem, where putting yourself in a position to help when the critical moment comes relies on otherwise going along with a sometimes unfortunate epistemic context.
Evolution Does Not Have Goals
True and important, but if anything I think the importance in this particular community is often overstated rather than unappreciated. I suspect the analogy itself is downstream of a flaw in human languages, which are very agent-centric in their grammatical assumptions. They didn’t evolve to describe impersonal forces like evolution, and trying to do so without such analogies is often very cumbersome in ways that obfuscate the reality more than they enlighten.
Evolution Does Not Produce Individual Brains
A lot of good points in this section as well. To the “Who cares?” question, the answer is, “We do, until and unless we know how to use other methods that sufficiently reliably encode the goals we (should) care about into the AIs we create.”
As a minimal answer to that question, in the world where it turns out we have nothing more valuable to do with all the energy and matter: suppose step one were “Store all excess emitted energy indefinitely,” and step two were “Engage in some form of stellar engineering to extend the sun’s lifetime and slow its emissions to a rate we can actually use.” Step three (plausibly started in parallel) would be having autonomous systems do the same for the rest of the reachable stars and galaxies, and then just wait until we need or want the energy. No need for descendants unless you want them; feel free to extend your own life arbitrarily far into the future, biologically or digitally or otherwise.
And yes, it might turn out that many or most of those stars and galaxies are already controlled by other civilizations, and therefore not available for our use. If so, then so be it. I hope we’re sane enough to leave them alone or become friendly in that case; otherwise there are lots of opportunities to waste resources fighting them and/or one another.
I agree with most of the arguments and most of the vision in this post, but I still think the fundamental problem we face is that no one, today, knows how to build a(n AI) system that reliably values any particular chosen thing. We’re getting better, especially with regard to moderately powerful current and near-future systems that are meaningfully constrained by the power of other people and systems. But as I understand it, this is still a deep, unsolved problem. In other words, when you say:
Historically, you can trace the ebb and flow of the plight of the average person by how decentralizing or centralizing the technology most essential for national power is, and how much that technology creates mutual dependencies that make it hard for the elite to defect against the masses.
I think this dynamic runs much deeper than the impression I got from this post.
Often, this then leads to calls for centralization, from Oppenheimer advocating for world government in response to the atomic bomb, to Nick Bostrom’s proposal that comprehensive surveillance and totalitarian world government might be required to prevent existential risk from destructive future technologies.
Was Oppenheimer wrong? AFAICT we did, in fact, build a (fairly competent by human standards) limited form of world government for the specific goal of constraining access to nuclear weapons. The US and USSR seized overwhelming power with regard to nukes just about as soon as they were able to, and then conspired to prevent anyone else from acquiring large amounts of that same power. In the process they altered (and slowed, and in some ways crippled) the potential for nuclear technology to solve civilian problems, most notably in energy. They did it to preserve themselves and the world, so yay, but they did do it. In the process they had to waste a lot of resources that could in principle have been used to do much more valuable things, had they felt safe enough to do so.
Thanks! Letting us play with the assumptions is a great way to develop an intuitive sensitivity analysis.
As you note, opinions differ widely, on many axes, and while I also would like to see more people’s viewpoints and advice made explicit, there is really no path you can actually be confident in. In that kind of scenario, there are IMO three factors to consider.
First, which predictions resonate with you, and best withstand scrutiny from you?
Second, which paths fail most gracefully? In the event you pick wrong (and in which there was a right thing to pick), what leaves you in an acceptable position anyway?
Third, by what criteria do you wish for your actions to be judged, and which paths best align with that?
I still find that WBW post series useful to send to people, 10 years after it was published. Remarkably good work, that.
I agree with you, but would point out that the vending machine project problems have, to some degree, been fixed: see https://thezvi.substack.com/p/ai-148-christmas-break?open=false#%C2%A7show-me-the-money. I personally put a lot of weight on the idea that even an actually-not-that-weak-AGI would struggle with many tasks at present due to how little of the scaffolding (aka the kinds of accommodations and training we’d do for a human) we’ve built to let it display its capabilities.
FWIW this happens all the time in both directions (the other being when a term becomes so overused as to become meaningless), and often (as, arguably, with AI today) both directions at once. My background is in materials science, and IMO this is basically what happened with terms like nanotech, metamaterials, smart materials, and 3D printing. My mental model is something like: Motte-and-bailey by people (often people not quite at the cutting edge but trying to develop a tech or product) leads to poorly researched press coverage (but I repeat myself) leads to popular disillusionment, such that the actual advances happening get quietly ignored and/or shouted down, no matter how much or how little impact they’re having. Sometimes the actual rates of technological progress and commercial adoption are quite smooth, even as the level of hype and investment and other activity shift wildly around them.
These kinds of haptic feedback devices exist and got talked about a decent amount 10-15 years ago, but mostly failed to take off for a variety of reasons (I don’t remember all the reasons, but cost, durability, and transparency were common ones). The first that comes to mind for me is Tactus Technology, which put a film over touchscreens that could dynamically form buttons as needed. I forget if that one was fluid-based or electroactive-polymer-based, but I remember both existing. (EAPs are also used for vibration feedback and actuators, but in this case the idea is to deform them into a fixed shape for as long as needed.)
IIRC there was also a haptic feedback device company that talked about integrations with AR/VR and physics engines and physical modeling tools, so you could e.g. literally feel yourself moving around a digital workshop or other setting and move stuff around and interact with any materials present. Can’t remember the name.
I also wish someone would pick these kinds of ideas back up.
I first encountered a similar idea in Harari’s “Sapiens”: humanity’s ability to coordinate at scale derives from our use of ritual to create shared belief in an imagined order, which lets us act as if something is true, and that shared belief is what gives it its power. As an example, roughly the entire legal system consists of (in essence) sorcerers casting spells and telling stories that work because we all go along with them. It’s so ingrained that we find it jarring and disturbing when people call attention to it.
“Is this meant to be your shield, Ned Stark? A piece of paper?”
“John Marshall has made his decision; now let him enforce it.”
I like the extension to normal etiquette.
I think the idea of coordinates makes a very clear link between dimensionality and algebraic variables, so I can definitely see this, yes.
I am not. I am only saying that #3 is sufficient to cover all iterative interactions where one player’s actions meaningfully alter the others’ outcomes.
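For concreteness, here’s a minimal sketch of what I mean, with standard prisoner’s-dilemma payoffs assumed (my illustration, not anything from the thread): against a tit-for-tat bot, what a player does in early rounds changes the payoffs it faces in later rounds, and that channel is exactly the leverage #3 points at.

```python
# Minimal iterated prisoner's dilemma sketch. Payoffs are the standard
# illustrative T=5, R=3, P=1, S=0; 'C' = cooperate, 'D' = defect.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def play(strategy, rounds=10):
    """Play a fixed strategy against tit-for-tat and return the strategy's total payoff."""
    my_total = 0
    my_moves = []   # what the bot sees and conditions on
    bot_moves = []  # what the strategy sees
    for _ in range(rounds):
        my_move = strategy(bot_moves)
        bot_move = tit_for_tat(my_moves)
        my_total += PAYOFF[(my_move, bot_move)]
        my_moves.append(my_move)
        bot_moves.append(bot_move)
    return my_total

always_defect = lambda history: 'D'
always_cooperate = lambda history: 'C'

# Defection pays once and is punished every round afterward, because the bot's
# future moves are a function of mine:
print(play(always_defect))     # 5 + 1*9 = 14
print(play(always_cooperate))  # 3 * 10  = 30
```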