M. Y. Zuo (Michael Y. Zuo)
Employees aren’t kept long enough to justify training them!
This is actually a benefit in disguise, at least for the efficiency of management in large organizations, and is probably sufficient to explain a large chunk of the 2x difference.
The hyper-effective self-learners who thrive in this paradigm and get promoted end up being smarter per unit of time than even the best Japanese employees, which translates to being smarter overall after several promotions.
I.e. every minute spent on something reduces their attainable competence somewhere else, since schedules are maxed out once you reach middle management. There are only 24 hours in a day, after all.
That’s true but you still have let’s say 2^1000000 afterwards.
Why does it matter? ‘Vibes’ are nowhere near as important as satisfying shareholders sufficiently, or having enough money in the bank account to be a credible operating business, at least in market economies. I imagine Comcast decision makers would care a lot more about those actual, legally binding concerns than about all the good ‘vibes’ in the world.
e.g. If their financials seemed shaky one day and they could somehow double their cash flow by sacrificing ‘vibes’, they would gladly welcome all the bad ‘vibes’ you could possibly have, times a million. It would literally be a welcome relief to accept this in exchange for more money.
There’s actually a meta-status problem with any group discussion of status, namely that if the group members judge it to have a below average chance of winning a status competition, in whatever sphere of activity they are engaged in, then its members have incentives to block or ignore the discussion.
Or, if they can’t prevent the discussion, even downplay the group itself, its quality, etc., much like in hunting groups for meat. This especially applies to group members who perceive themselves to be in the most marginal, low-status cohort.
The core reason is nobody wants to be known as a 100% guaranteed loser, so anyone who already has below average prospects is going to feel extremely sensitive about even the slightest chance of the group losing future status competitions and thus dragging them down even further.
This doesn’t apply to the most valuable group members, who presumably view themselves as having above-average status, but they face the opposite problem: actually winning a status competition might attract people who are above them into joining, thus diluting their own influence, or even worse, relegating them to the second tier. (This doesn’t apply if the group is already at the very highest level.)
So paradoxically only the ‘middle-class’ members reliably do anything more than empty talk, at least for status-constrained issues. Literally everyone else has incentives to talk a big game while also preventing anything decisive.
That seems to be an argument for something more than random noise going on, but not an argument for ‘LLMs are shoggoths’?
This definition seems so vague and broad as to be unusable.
Both are bad, but only one of them necessarily destroys everything I value.
You don’t value the Sun, or the other stars in the sky?
Even in the most absurdly catastrophic scenarios it doesn’t seem plausible that they could be ‘necessarily destroyed’.
The shorter the better. Or as Lao Tzu said, Those who know don’t talk. Those who talk don’t know…
The disclaimer doesn’t need to enumerate a full list; pointing out that a nebulous cloud of potential and actual caveats exists and may apply is sufficient.
The threshold still has to be greater than zero power for its ‘care’ to matter one way or the other. And the risk that you mention needs to be accepted as part of the package, so to speak.
So who gets to decide where to place it above zero?
Why not add a disclaimer spelling out that what’s written could be false or misleading depending on the caveats?
“AI Safety”, especially enforcing anything, does pretty much boil down to human alignment, i.e. politics, but there are practically zero political geniuses among its proponents, so it needs to be dressed up a bit to sound even vaguely plausible.
It’s a bit of a cottage industry nowadays.
Wouldn’t that imply the existence of this essay, available for anyone passing by to read, is a net negative?
Like the parent said “Deport all Rationalists” or even “Deport everyone named Arturo Macias” are entirely feasible to accomplish with available resources…
It seems like the more important issue is who gets to decide what to vote on and what is presented for voting?
e.g. if the limit is, say, 1 vote per day, allowing sufficient time for reflection and study of the issue at hand (assuming perfect allocation of time), there are still way more than 365 possible things a year to vote on.
If the agent had no power whatsoever to affect the world, then it wouldn’t matter whether it cared or not.
So the real desire is that it must have a sufficient amount, but not over some threshold that will prove to be too frightening.
Who gets to decide this threshold?
But an even larger flaw is that they have very small filter areas for no apparent reason.
Is reducing cost of manufacturing filters ‘no apparent reason’?
It seems like literally the most important reason: the profit margin on selling replacement filters would be heavily reduced, assuming pricing remains the same.
That’s a really neat point, has it ever been addressed in prior literature, that you’ve gone over?
Thanks, you’ve listed some plausible downsides, but the upsides need to be enumerated too, and then likely several stages of synthesis are needed to arrive at a final, persuasive argument, one way or the other. I’m not saying you have to do all this work, just that someone does in order to advance the argument.
So far I’ve never seen such, anywhere online.
Just because the US government contains agents that care about market failures, does not mean that it can be accurately modeled as itself being agentic and caring about market failures.
I agree, the fact that it contains such agents does not necessarily imply anything for or against. e.g. It’s entirely possible for two or more far-flung branches of the USG to work towards opposite ends and end up entirely negating each other.
The more detailed argument would be public choice theory 101, about how the incentives that people in various parts of the government are faced with may or may not encourage market-failure-correcting behavior.
Can you lay out this argument with more detail?
There’s a market-for-lemons problem, similar to the used car market, where neither the therapist nor the customer can detect all the hidden problems, pitfalls, etc., ahead of time. And once you do spend enough time to actually form a reasonable estimate, there are no takebacks possible.
So all the genuinely high-quality therapists will have no availability, and the lower-quality therapists will, almost by definition, be overrepresented among those with availability.
Edit: Game theory suggests that you should never engage in therapy, or at least never with someone who has available time, at least until someone invents the certified pre-owned market.
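The adverse-selection claim above can be sketched with a toy simulation (all numbers and the booking model are my own hypothetical assumptions, not from any data): if a therapist’s chance of being fully booked rises with their unobserved quality, then the pool that still has open slots ends up lower quality on average than the booked pool.

```python
import random

random.seed(0)

# Hypothetical model: each therapist has a latent quality score in [0, 1]
# that clients can't observe up front, but word of mouth fills the
# calendars of better therapists faster.
therapists = [random.random() for _ in range(1000)]

# Assume the probability of being fully booked equals the quality score.
booked, available = [], []
for q in therapists:
    (booked if random.random() < q else available).append(q)

mean = lambda xs: sum(xs) / len(xs)
print(f"mean quality, booked:    {mean(booked):.2f}")
print(f"mean quality, available: {mean(available):.2f}")
```

Under these assumptions the booked pool averages roughly twice the quality of the available pool, which is the market-for-lemons dynamic: availability itself becomes a (noisy) negative signal of quality.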