M. Y. Zuo (Michael Y. Zuo)
“We can group projects into subprojects without changing the overall return”
What if this were not true? Would that make the problem intractable?
Could the hypothetical AGI be developed in a simulated environment and trained with proportionally lower consequences?
That is true: the desired characteristics may not develop as one would hope in the real world. Though that is the case for all training, not just AGI training. Humans, animals, even plants do not always develop along optimal lines, even with the best ‘training’, when exposed to the real environment. Perhaps the solution you are seeking, one without the risk of error, does not exist.
Therefore, determinism is impossible? You’ve demonstrated quite a neat way of showing that reality is of unbounded complexity whereas the human nervous system is, of course, finite; as such, everything we ‘know’, and everything that can be ‘known’, is necessarily, in some portion, illusory.
You’ve really put some thought into this, thanks for sharing.
Though I don’t want to make a critique, I would like to save you a bit of future trouble, as a courtesy from someone who has trodden the same path.
The issue with basing a philosophy on Mozi is that there are no ‘fixed standards’. All standards, like the rest of the universe, are forever in flux. Universal frameworks cannot exist.
For the next stage I found reading Liezi was helpful.
Right, for determinism to work in practice, some method of determining that ‘previous world state’ must be viable. But if there are no viable methods, and if somehow that could be proven, then we could be confident that determinism is impossible, or at the very least a faulty idea.
When you said ‘not directly extensible’ I understood that as meaning ‘logistically impossible to map perfectly onto a model communicable to humans’, with the fish fluctuating in weight, in reality, between and during every observation, and between every batch. So even if perfect weight information were somehow obtained, it would only hold for that specific Planck second. And then averaging, etc., will always carry some inherent error. So every step along the way is a ‘loose coupling’, such that the final product, a mental model of what we just read, is partially illusory.
Perhaps I am misunderstanding?
Though to me it seems clear that there will always be extra bits of information, of specification, that cannot be captured in any model, regardless of our further progress in modelling: whether that’s from an abstract model to a finer-grained model, or from a finer-grained model to a whole-universe atomic simulation, or from a whole-universe atomic simulation to actual reality.
“All Nature is but art, unknown to thee
All chance, direction, which thou canst not see;
All discord, harmony not understood;
All partial evil, universal good.”
Alexander Pope
It seems that your comment got cut off at the end there.
Referencing Leo Szilard is amusing for this topic, as his moment of genius insight, that a neutron-induced nuclear chain reaction could release enormous amounts of energy, is one of those few ideas so genuinely beyond the then-current paradigm (the early 1930s) that it seems like real precognition.
Allegedly he spent a significant fraction of every day sitting in a hotel bathtub in rumination, and he lived permanently in hotels for decades. I assume that is how he developed that depth of thinking.
Interesting idea, though the first round seems unverifiable. “How many nuclear weapons will states possess on December 31, 2022?”
Well, if we were to know that assertion is unprovable, or undecidable, then we can treat it as any other unprovable assertion.
Ah, I understand what you’re getting at now, dxu, thanks for taking the time to clarify. Yes, there likely are not extra bits of information hiding away somewhere, unless there really are hidden variables in space-time (one of the possible resolutions to Bell’s theorem).
When I said ‘there will always be’ I meant it as ‘any conceivable observer will always encounter an environment with extra bits of information outside of their observational capacity’, and thus beyond any model or mapping. I can see how it could have been misinterpreted.
In regards to my comment on determinism, that was just some idle speculation which TAG helpfully clarified.
Perhaps it’s our difference in perspective, but the very paragraph you quoted in your comment seems to indicate that our perceptive faculties will always contain uncertainties, resulting in classification errors and therefore correspondence mismatch.
I’m then extrapolating to the consequence that we will always be subject to ad-hoc adjustments to adapt, as the ambiguity, uncertainties, etc., will have to be translated into the concrete actions needed for us to continue to exist. This then results in an erroneous mental model, or what I term ‘partially illusory knowledge’.
It’s a bit of an artistic flair, but I make the further jump to consider that since all real objects are in fact constantly fluctuating at the Planck scales, in many different ways, every possible observation must lead to, at best, ‘partially illusory knowledge’. Since even an infinitesimally small variance still counts as a deviation from ‘completely true knowledge’. Maybe I’m just indulging in word games here.
Since there’s no question at the end, or some further anchor for productive discussion, I presume you are hoping for commenters to offer some. Here’s one:
People’s demand for positional goods likely rises and falls in correlation with changes in the perceived value of those goods. So if we desire less competition for such goods, then it would make sense to try to lower the incentives to attain them.
To me it seems sensible that lowering the incentives would require lowering the perceived value of such goods, though other ways may prove even more fruitful.
To lower the perceived value of positional goods would require what methods?
To lower the actual value of positional goods would require what methods?
What is the relationship between perceived and actual value in this case? Are they too intertwined to separate in this manner?
Can these methods, whatever they may be, even be carried out? (either individually or in combination)
What are the requirements to do so?
And what are the implications for society if such a change occurred?
In regards to your fears: “It highlights our general failure to do helpful things, and plausibly blames all our supply chain (and also plausibly all our civilizational) problems on stupid pointless rules and a failure to do obviously correct things.”
Adam Smith may offer some enlightenment on the underlying reasons:
“The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might choose to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.”
Adam Smith
Attention will always be scarce, and attention is a very significant and valuable resource. That seems to indicate a ‘post-scarcity’ society is fundamentally impossible.
“The wealth required by nature is limited and is easy to procure; but the wealth required by vain ideals extends to infinity.”—Epicurus
When people are young, they typically have multiple such vain ideals, which infinitely exceed their available resources. So they engage in many speculative activities, even ones with exceedingly low expected rates of return, or even negative rates (which humans also typically lack the perceptiveness to distinguish), in the hopes of securing their objects of desire.
When people grow old, typically their resources have grown greater, or their ideals have diminished over the course of life, or both.
Thanks for the very interesting dynamic you’ve presented. It seems to be a subset of the coordination problems seen in iterated prisoner’s dilemma games with more than two players; I’m not sure of the exact name for it.
I imagine this is the primary logistical reason why pyramid-like hierarchies formed in human societies in the first place: to solve such coordination problems.
Would the StackExchange model work? Granting ranks, along with privileges, on the basis of productive contribution, and ultimately recruiting moderators from the highest ranks.
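The mechanism here is just a monotone ladder: contribution points unlock privileges at fixed thresholds. A minimal sketch of the idea in Python (the threshold values and privilege names are hypothetical illustrations, not StackExchange’s actual numbers):

```python
# Hypothetical contribution-score ladder, StackExchange-style.
# Thresholds and privilege names are made up for illustration.
PRIVILEGE_THRESHOLDS = [
    (0, "participate"),
    (100, "vote"),
    (1000, "edit others' posts"),
    (10000, "moderate"),
]

def privileges(reputation: int) -> list[str]:
    """Return every privilege unlocked at a given reputation score."""
    return [name for threshold, name in PRIVILEGE_THRESHOLDS
            if reputation >= threshold]

print(privileges(1500))  # ['participate', 'vote', "edit others' posts"]
```

The appeal for the coordination problem above is that the ladder is objective and cumulative: rank follows from logged contributions rather than from self-nomination, so moderator recruitment at the top tier inherits that legitimacy.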
Interesting post!
Query: how do you define ‘feasibly’, as in ‘Incentive landscapes that can’t feasibly be induced by a reward function’?
As from my perspective, all possible incentive landscapes can be induced by reward, given sufficient time and energy. Of course, a large set of these are beyond the capacity of present human civilization.
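The intuition behind “all incentive landscapes can be induced by reward” can be sketched in a toy way: over any finite set of outcomes, an arbitrary desired ordering is realized by simply assigning each outcome its rank as the reward. Everything here (the outcome names, the ordering) is a hypothetical illustration; note the post being discussed concerns the harder case of rewards constrained to depend on state, which this sketch deliberately sidesteps:

```python
# Toy illustration: any desired preference ordering over a finite set of
# trajectories can be induced by a trajectory-level reward function.
trajectories = ["explore", "exploit", "defect", "cooperate"]

# Suppose we want: cooperate > explore > exploit > defect.
desired_order = ["cooperate", "explore", "exploit", "defect"]

# Assign reward by rank; the induced ordering then matches by construction.
reward = {t: len(desired_order) - i for i, t in enumerate(desired_order)}

best = max(trajectories, key=lambda t: reward[t])
print(best)  # cooperate
```

Whether this remains possible when the reward must be a function of individual states (rather than whole trajectories) is exactly where the ‘feasibly’ qualifier seems to do its work.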