I only saw this now—huge kudos to Kontsevich for being so clear-eyed about this.
Basically, I’d bet capable people are still around; it’s just that the circumstances don’t allow them to rise to the top for whatever reason.
My guess would be that nowadays many people who could bring a fresh perspective, or simply high-caliber original thinking, either get selected out or drowned out, or are pushed by social and financial incentives to align their thinking with more “mainstream” views.
I wasn’t quite happy with the OP’s framing in terms of dom/sub dynamics, but couldn’t quite put my finger on why. I think your point that it’s more about social expectations and connections in general captures it pretty well!
Your dig at pick-up artists, as stated, doesn’t seem to amount to much more than “these guys feel icky”, which most likely just reflects their low status. (Separately, there’s also a bunch of toxic behavior related to the pick-up mindset that one could rightfully criticize.)
I did not know this! And it’s quite an update for me regarding Mochizuki’s credibility on the matter.
This seems like nonsense. If there’s any way to formalize what Mochizuki claims, he could and should do this to achieve what might be the greatest intellectual upset in history. On the other hand, he’s likely just wrong about something and his proof wouldn’t go through, so there’s no use in trying to settle this with a proof assistant.
I’m always wondering whether there’s something going on here where, by definition, we can rationally understand how high-value a utopia would be, but since we can’t really tell for sure where things will end up, we may be assigning way too high an intuitive probability to it.
It feels like your implicit framing is that one should welcome technological progress and be proactive about adapting to it, but I’m missing the perspective that one should (sometimes) also be proactive about shaping the course of progress itself.
Also, most of the time people who are seriously discussing a matter likely don’t talk about whether a technology is good or bad as a whole, but refer to the way it’s currently being realized in the world, so that seems like a strawman.
If they changed their mind not immediately after the election, and signaled credibly that they did so for concrete reasons after having looked into/engaged with the issue, then this’d probably be fine in some cases? (Ideally, if they’re right, they can convince you to change your mind too)
OK, so here we should give some credit to Eliezer, who has thought this through and has been making a big deal about this precise thing (although partly through his recent fanfic that he keeps alluding to, so maybe that doesn’t really count).
I tend to find him a bit annoying on these things, because surely people wouldn’t seriously try to pin substantial hopes on these kinds of ideas; but what do I know.
My personal take is that this is already an area that people at large are roughly appropriately worried about. It can also lead you into politically polarized territory, which people may reasonably prefer to avoid unless there’s a good reason.
I’m not super happy with my phrasing, and Ben’s “glory” mentioned in a reply indeed seems to capture it better.
The point you make about theoretical research agrees with what I’m pointing at—whether you perceive a problem as interesting or not is often related to the social context and potential payoff.
What I’m specifically suggesting is that if you took this factor out of ML, it wouldn’t be much more interesting than many other fields with a similar balance of empirical and theoretical components.
What you’re pointing at applies if AI merely makes most work obsolete without otherwise significantly disturbing the social order, but you’re not considering the (also historically common) replacement/displacement scenarios. It is clearly bad from my perspective if (e.g.) either:
1) Controllable strong AI gets used to take over the world and, in time, replace the human population with the dictator’s offspring.
2) Humans get displaced by AIs.
In either case, the surviving parties may well look back on the current state of affairs and consider their world much improved, but it’s likely we wouldn’t on reflection.
From my perspective, the interesting parts are “getting computers to think and do stuff” and getting exciting results, which hinges on the possible payoff rather than whether the problem itself is technically interesting or not. As such, the problems seem to be a mix of empirical research and math, maybe with some inspiration from neuroscience, and it seems unlikely to me that they’re intellectually substantially different from other fields with a similar profile. (I’m not a professional AI researcher, so maybe the substance of the problems changes once you reach a high enough level that I can’t fathom.)
Aren’t these basically mostly “works on capabilities because of status + power”?
(E.g. if you only care about challenging technical problems, you’ll just go do math)
Fwiw, I’ve read a number of Smil’s books, and my impression was that he strongly expressed that same opinion about sigmoids; the mentioned example might have been precisely an attempt to illustrate how you can show almost anything by fitting the right sigmoid. (But it’s been a while since I read them.)
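As a minimal sketch of that flexibility (my own illustration, not Smil’s actual example, and the function names are just placeholders): because the early part of a logistic curve is itself exponential, you can fit a three-parameter sigmoid to data that has no ceiling at all and get a close in-sample match, so a good sigmoid fit by itself says very little about any eventual plateau.

```python
# Rough sketch: fit a 3-parameter logistic to a purely exponential "trend".
# The optimizer can push the inflection point past the observed window and
# still track the data closely in-sample.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve with carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0.0, 10.0, 50)
data = np.exp(0.3 * t)  # purely exponential data with no ceiling

# Initial guess deliberately places the ceiling just above the observed maximum.
p0 = [2.0 * data.max(), 0.5, 10.0]
params, _ = curve_fit(logistic, t, data, p0=p0, maxfev=10000)

max_rel_err = np.max(np.abs(logistic(t, *params) - data) / data)
print("fitted (K, r, t0):", params)
print("max relative in-sample error:", max_rel_err)
```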
I’m actually not sure what this refers to. E.g. when Boaz Barak’s posts spark some discussion, it seems pretty civil and centered on the issue. The main disagreements don’t necessarily get resolved, but at least they get identified, and I didn’t notice any serious signs of tribalism.
But maybe this is me skipping over the offending comments (I tend to ignore things that don’t feel intellectually interesting), or this is not an example of the dynamic that you refer to?
This is not an obvious solution, since (as you’re probably aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree that this is an issue, but it would at least need to be argued.
This article said that it involved quite a bit of direct personal pressure and that his reversal was pivotal (but it may not be very accurate).