The many disagreement-karma votes (-7 as of now) suggest my failed joke was largely taken as a serious statement, which is in itself quite interesting; maybe worth preserving as a statistic in this now-retracted comment. We’re living in strange times!
And next year we may learn it is in fact flat ;-)
Thanks for sharing, surprising stuff!!
https://e-estonia.com/e-governance-saves-money-and-working-hours/ “Estonian public sector annual costs for IT systems are 100M Euros in upkeep and 81M Euros in investments”
If I read you correctly, the 100+81M in Estonia covers (i) the ENTIRE gvmt IT system (not just e-signatures), serving (ii) the whole population — though I could not read the report in Estonian to verify. Switzerland’s “up to $19bn” is specifically for e-signatures, and only for within-gvmt exchanges afaik.
2. The pay of rail executives depends on short-term profits, so they’re against long-term investments.
I think that is not as obvious an explanation as it may intuitively seem:
a. A company’s profit is not equal to its cash flow. Profit includes the value of the assets invested in, so a valuable investment should normally not look bad on the balance sheet, even when evaluated in the short run.
b. If there really were a clear 19% or so ROI, even if the accounting ignored a.: you’d typically expect a train company to debt-finance an overwhelming share of the electrification capex, as is common for large infrastructure projects, attenuating the importance of the cash-flow issue and making the investment even more attractive for equity investors.
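To make a. and b. concrete, here is a minimal back-of-the-envelope sketch. All the numbers (capex, asset life, debt share, interest rate) are my own assumptions purely for illustration; only the ~19% ROI figure comes from the discussion above:

```python
# Hypothetical numbers purely for illustration.
capex = 1_000.0               # electrification capital expenditure
roi = 0.19                    # the ~19% ROI figure discussed above
annual_return = capex * roi   # 190 per year

# Point a.: the capex is capitalized as an asset, so year-one profit
# reflects only depreciation, not the full cash outflow.
depreciation = capex / 40     # assuming e.g. a 40-year asset life
year_one_profit_impact = annual_return - depreciation
print(year_one_profit_impact)  # 165.0: positive even in the short run

# Point b.: debt-financing most of the capex leverages equity returns.
debt_share, interest = 0.8, 0.05          # assumed financing terms
equity = capex * (1 - debt_share)         # ~200 of equity at risk
interest_cost = capex * debt_share * interest
equity_return = (annual_return - interest_cost - depreciation) / equity
print(round(equity_return, 3))  # 0.625: far above the unlevered 19%
```

Under these (assumed) terms, the investment looks good on the income statement even in year one, and leverage makes it more, not less, attractive to short-term-oriented equity holders.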
Taking the ‘China good in marginal improvements, less in breakthroughs’ story in some of these sources at face value, the critical question becomes whether leadership in AI hinges more on breakthroughs or on marginal innovations & scaling. I guess both could be argued for, with the latter being more relevant especially if breakthroughs generally diffuse quickly.
I take as the two other principal points from these sources (though I also haven’t read all of them in full detail): (i) some organizational drawbacks hamper China’s innovation sector, especially what one might call high-quality innovation; (ii) that said, innovation strategies have been updated and there seems to be observable progress in China’s innovation output over time.
One thing I’m at least slightly skeptical about is the journal/citation-based metrics, as I’m wary of the stats being distorted by English-language/US citation circles. Though that’s more of a side point.
In conclusion, I don’t update my estimate much. The picture painted is mixed anyway, with lots of scope for China to become stronger in innovating at any time, even if it should indeed still have significant gaps now. I would remain totally unsurprised if many leading AI innovations also come out of China in the coming years (or decades, assuming we’ll witness any), though I admit I remain a layperson on the topic, albeit one skeptical of so-called experts’ views in that domain.
Good counterpoint to the popular, complacent “China is [and will be?] anyway lagging behind in AI” view.
An additional strength
Patience/long-term foresight/freedom to develop AI w/o the pressure from the 4-year election cycle and to address any moment’s political whims of the electorate with often populist policies
I’m a bit skeptical about the popular “lack of revolutionary thought” assumption. It reminds me a bit of the “non-democracies cannot really create growth” claim that was taken as a law of nature by far too many 10-20 years ago, before today’s China. Keen to read more on the lack of revolutionary thought if somebody shares compelling evidence/resources.
Fundamental Research = State
Applied Research = Companies
.. is a common paradigm, and—while grossly too simplified—makes some sense: the latter category has more tangible outputs, shorter payback etc. In line with @Dagon’s comment, at the very least these two broad categories would have to be split for a serious discussion of whether ‘too much’ or ‘too little’ is done by gvmt and/or companies.
I’ve worked in a research startup and saw the same dollar go much further in producing high-quality research outputs than what I’ve directly experienced in some places (and observed from many more) in the exact same domain in state-sponsored research (academia), where there is often a type of Resource Curse dynamic. My impression is that these observations generalize rather well (I’m talking about a general tendency; needless to say, there are many counterexamples; often those wanting to do serious research are exactly attracted by public research opportunities, where they can do great work). Your explanation leaves out this factor; it might explain a significant part of the state’s reluctance to spend more on R&D.
This does not mean the state should not do more (or support more) R&D, but I think there are very important complexities the post leaves out, limiting its explanatory power.
Explains the existence of R&D, not an excess of it
Thanks, yes, sadly seems all very plausible to me too.
Cruz: I think a model where being a terribly good liar (whether coached, innate, or self-taught) is a prerequisite for becoming a big cheese in US politics fits observations well.
Trumpeteer numbers: I’d now remove that part from my comment. You’re right. Shallowly my claim could seem substantiated by things like (Atlantic) “For many of Trump’s voters, the belief that the election was stolen is not a fully formed thought. It’s more of an attitude, or a tribal pose.”, but even there upon closer reading, it comes out: In some form, they do (or at least did) seem to believe it. Pardon my shallow remark before checking facts more carefully.
AI safety: I guess what could make the post easier to understand, then, is if you made it clearer (i) whether you believe AI safety is in reality no major issue (vs. merely overemphasized/abused by the big players to gain regulatory advantages), and, if you do, i.e. if you dismiss most AI safety concerns, (ii) whether you do that mainly for today’s AI or also for what we expect in the future.
Ng: In no way do I doubt his merits as an AI pioneer! That does not at all guarantee he has the right assessment of the technology’s future dangers. Incidentally, I also found some of his dismissals very lighthearted; I remember this one. On your link Google Brain founder says big tech is lying about AI extinction danger: that article quotes Ng on it being a “bad idea that AI could make us go extinct”, but it does not provide any argument supporting that claim. Again, I do not contest that AI leaders are overemphasizing their concerns and abusing them for regulatory capture. Incentives are obviously huge. Ng might even be right with his “with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting”, and that’s a hugely relevant question. I just don’t think it’s a reason to dismiss AI safety concerns more generally (and imho your otherwise valid post loses power by pushing in that direction, e.g. with the Ng example).
Agree with a lot, but major concerns:
I’d bet this is entirely spurious (emphasis added):
And the bigger the shift, the more it reinforces for the followers that their reality is whatever the leader says. Eventually people’s desire to comply overcomes their sense of reality. [...] Trump also changes positions regularly. [...] That explains why Ted Cruz became Trump’s ally. It was not despite Trump’s humiliation of Cruz, but because of it. The Corruption of Lindsey Graham is an excellent in-depth read on how another opponent became a devoted supporter. [...]
Do you really believe Cruz & co.’s official positions have anything to do with their genuine beliefs? That seems a weird idea. Would you have thought the same about Tucker Carlson before his behind-the-scenes hate speech about Trump was disclosed? I think the reality of the big-politics business is too obvious to claim this is in any way about true beliefs.
2. Another point of skepticism is on supposed “70 million devoted followers who hang on [Trump’s] every word”.
I have the impression (maybe simply a hope) that a (large) bunch of these may not be as fully brainwashed as we make them out to be. I can too easily imagine (and to some degree empathize with) people who don’t actually buy all that much from Trump, but simply like his style and hate conventional politics, with its equally constant but better-hidden lies, so much that, as a protest, they really do ‘love’ him a bit compared to the (in their view) no-less-dirty but more uncannily, carefully hidden alternative.
3. Finally, on AI specifically:
Andrew Ng arguably rather lightheartedly dismissing various AI concerns may also speak against him rather than the other way round.
Granted, nothing is more plausible than that some AI safety ideas play well into the hands of the big AI guys, giving them an effective incentive to push in that direction more strongly than they otherwise would. One could thus say, if these were the only people calling out AI dangers and requesting AI safety measures: mind their incentives! However, that is of course not at all the case. From whichever angle we look at the AI questions in depth, we see severe unsolved risks. Or at least: a ton of people advocate for taking them seriously even if they personally would rather have different incentives, or no straightforward incentives in any direction at all.
Of course, this leaves open the scope for the big AI guys to push AI safety regulation in a direction that specifically serves them instead of (only) making the world safer. That would barely surprise anyone. But it substantiates “AI safety as a PR effort” about as much as the fact that ‘climate scientists would lose their jobs if there were no climate change’ proves that climate change is a hoax.
That’s so correct. But still so wrong—I’d like to argue.
Because replacing the brain is simply not the same as replacing just our muscles. In all the past we have merely augmented our brain, adding stronger muscle, calculation, or writing power etc. using all sorts of dumb tools. But the brain remained the crucial, all-central point of all action.
We will have now tools that are smarter, faster, more reliable than our brains. Probably even more empathic. Maybe more loving.
Statistics cannot be extrapolated across a visible structural break. Yes, it may have been difficult to anticipate, 25 years ago, that computers that calculate so fast etc. would not quickly change society all that fundamentally (although still quite fundamentally), so the ‘this time is different’ crowd of 25 years ago was wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, they could not change the world as fundamentally as we now foresee. This time, we seem to be about to get the truly smart ones.
The future is a miracle, we cannot truly fathom how exactly it will look. So nothing is absolutely sure indeed. But merely looking back to the period where mainly muscles were replaceable but not brains, is simply not a way to extrapolate into the future, where something qualitatively entirely new is about to be born.
So you need something more tangible, a more reliable argument, to rebut the hypothesis underlying the article. And the article beautifully, concisely explains why we’re awaiting something rather unimaginably weird. If you have something showing where specifically it seems wrong, it’d be great to read that.
I think this time is different. The implications are simply so much broader, so much more fundamental.
I guess both countries would lose a nuclear war, if for some weird reason we really had one between the US and China.
In the grand scheme of things, that would not matter much.
If China wants to fully reintegrate Taiwan, it can do so today, or else simply at the latest in a few years.
I guess if China does not do that in the near future, the main reason will be that (i) there is simply not enough value in it and/or (ii) there is significant value for the government in keeping the Taiwan issue as a story for its citizens to focus on/a sort of rally-behind-the-flag effect, and less so the effect of US deterrence.
I wonder whether 1.-5. may often not be so directly dominant in our head, but instead, mostly:
6. The drowning child situation simply brings out the really strong warrior/fire-fighter instinct in you, so, as a direct disposition you’re willing to sacrifice a lot of comfort to save it
Doesn’t alter the ultimate conclusion of your nice re-experiment much, but it means there’s a sort of non-selfish reason for your willingness to help the drowning child, in contrast to the 5 selfish ones (even if, evolutionarily, 1.-5. are the underlying reasons why we’re endowed with the instinct in 6.)
Interesting thought. From what I’ve seen from Yann LeCun, he really does seem to consider AI x-risk fears mainly as pure fringe extremism; I’d be surprised if he holds back elements of the discussion just to prevent convincing people the wrong way round.
For example, the YouTube video Yann LeCun and Andrew Ng: Why the 6-month AI Pause is a Bad Idea shows rather clearly his straightforward worldview re AI safety, and I’d be surprised if his simple dismissal of everything doomsday-ish, however surprising, was just strategic.
(I don’t know which possibility is more annoying)
World champion in Chess: “It’s really weird that I’m world champion. It must be a simulation or I must dream or..”
Joe Biden: “It’s really weird I’m president, it must be a simul...”
(Donald Trump: “It really really makes no sense I’m president, it MUST be a s..”)
David Chalmers: “It’s really weird I’m providing the seminal hard problem formulation. It must be a sim..”
Rationalist (before finding lesswrong): “Gosh, all these people around me, really wired differently than I am. I must be in a simulation.”
Something seems funny to me in the anthropic reasoning in these examples, and in yours too.
Of course we have exactly one world champion in chess or anything else, so reasoning that makes the world champion question his champion-ness quasi by definition seems odd. Then again, I’d be lying if I claimed I could not intuitively empathize with his wondering about the odds of exactly him being the world champion among 9 billion.
This leads me to the following, which eventually more or less satisfies me:
Hypothetically, imagine each generation has only 1 person, and there’s rebirth: it’s just a rebirth of the same person, in a different generation.
With some simplification:
For 10 000 generations you lived in stone-age conditions
For 1 generation—today—you’re the hinge-of-history generation
X (X being: you won’t live anymore at all as AI killed everything; or you live 1 mio generations happily, served by AI, or what have you).
The 10 000 you’s didn’t have much reason to wonder about the hinge of history, and so didn’t happen to think about it. The one you in the hinge-of-history generation, by definition, has many reasons to think about the hinge of history, and does think about it.
So it becomes a bit like a lottery game that you repeat many times until you naturally draw the winning number once. At that lucky punch, there’s no reason to think “Unlikely; it’s probably a simulation”, or anything of the sort.
My impression is that, in a similar way, the reincarnated guy should not wonder about it even when his memory is wiped each time, and in the same vein (hm, am I being sloppy here? that’s the hinge of my argument) neither do you have to wonder too much.
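The selection effect in the rebirth story can be made concrete with a toy sketch (the specific numbers are mine, just to mirror the 10 000-generations setup): exactly one life is the hinge, and only that life ever has reason to ponder it, so the pondering is guaranteed to occur somewhere and carries no surprise in itself.

```python
# Toy model of the rebirth story above (assumed setup, for illustration):
# one person is reborn through n_lives generations; exactly one of them
# is the hinge-of-history generation.
n_lives = 10_001
hinge = 5_000  # which life is the hinge doesn't matter for the point

# Only the hinge-life ends up thinking "how unlikely that I'm the hinge!"
ponderers = [life for life in range(n_lives) if life == hinge]

# With probability 1, exactly one life ponders -- like a lottery ticket
# drawn repeatedly until it wins. The pondering always happens somewhere,
# so observing it is not evidence for "this must be a simulation".
print(len(ponderers))  # 1
```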
A lot of sympathy for the challenge you describe. My possibly somewhat half-baked views on this:
If you take your criteria seriously, dating sites where you can answer questions and filter according to answers might help
Knowing that for a marriage to work you anyway need something rather close to radical acceptance, you might simply try to put your abstract criteria into perspective a bit, and focus more on finding a partner who accepts your ways very happily rather than one who shares so many of them (there is also some danger in this; maybe the truth ‘lies in the middle’: do it partly, but not too generously; I guess what I mean is, reading your text, I get the feeling you might be erring too much on the side of strict requirements, though that’s only a spontaneous guess)
‘Red hair as a serious criterion—really?‘: I think the 1/625 sounds like a reasonable candidate for a spurious correlation: people have such a large number of characteristics that two of your favorite dates sharing one of them does not say much about the relevance of that individual characteristic, statistically speaking. That said, I can believe you simply have a sort of red-hair fetish/preference, but then, if things fit more generally with a person, her having the ‘right’ hair color as well seems very unlikely to be a major factor in long-term happiness with her.
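The spurious-correlation point can be put in numbers with a small birthday-paradox-style calculation. The inputs are my assumptions, purely for illustration: say people have 100 independent noteworthy traits, each with a 1-in-25 base rate, so any one *specific* trait is shared by two random people with probability (1/25)² = 1/625:

```python
# Assumed numbers for illustration: 100 independent traits, each with a
# 1-in-25 base rate, giving the 1/625 chance of sharing a specific trait.
n_traits = 100
p_shared_specific = (1 / 25) ** 2          # 0.0016 = 1/625

# Probability that the two favorite dates share at least ONE such trait:
p_some_shared = 1 - (1 - p_shared_specific) ** n_traits
print(round(p_some_shared, 3))  # 0.148
```

So under these assumptions, a roughly 15% chance of *some* shared rare trait: finding one coincidence of this kind among your favorite dates is fairly unremarkable, even though any pre-specified trait match would be a 1/625 event.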
And good luck!!