If AI is giving us this very humane writing, I am impressed; very well written, so kudos. “...trying to reconstruct Herodotus from a copy that fell into a blender...” Lol
AaronF
All great points. But that is yesterday’s question; tomorrow’s question is when people deliberately use the ambiguity (to the advantage of violence) to get elected mayor or to Congress, in the name of a religion that is, explicitly and in practice, anti-religious-freedom. [They also use the ambiguity of words and law to take away arms of defense.]
We shall see which way the western world wants to go. Though I imagine that the dual sovereignty of the US (federal and state) will start to clash more heavily. The dissonance will increase. Our agreement is crumbling under the paradox of tolerance.
We’ve also accepted the blatant use of hypocrisy (some amendments are sacred; others not so much. One has a First Amendment right to riot and dog-whistle all sorts of discrimination, but not a First Amendment right to segregate—though in practice we realize that we can’t do anything about it, save for the token National Guard forcing children to comply). Both sides have gone down that road; the paradox has no party affiliation. And if we agree that to be moral, one sometimes has to be immoral, well, where is that line?
Well said.
To add to the comment:
OP: “Some examples to illustrate the absurdity of this logic: Mammals live outdoors; therefore, homelessness is good.”
A positive statement would be that it may be an evolutionary ‘good’ even if distasteful. An example might be that homeless people may have more partners than a high-IQ autist with a mansion. Or we can say that, all else being equal, it is healthier for humans to be outside more, much more than in the modern world. Designed housing and modern urban systems need to take this into account.
OP: “Animals are illiterate; therefore, illiteracy is good.”
It may be distasteful, but many studies show that more years of education mean lower fertility; and that sexual selection TODAY actually does select for less-intelligent genes* (ADHD, or even bad habits like alcohol and smoking). Nature works in mysterious ways. A good way forward is to think quite hard about underlying behaviors. Why are literacy rates so low? And persistently so?
*See: Life without sex: Large-scale study links sexlessness to physical, cognitive, and personality traits, socioecological factors, and DNA: https://www.medrxiv.org/content/10.1101/2024.07.24.24310943v1.full
Missed the forest for the trees. Supply is short, and lots of government regulations distort the market (mainly risk). The US has been chronically underperforming with regard to the supply of housing. There are extra distortions on the margins in NYC.
“One child is a death sentence.”
Someone who writes this is not a rational person. Not a reasonable person. Not a well-balanced, measured person. In fact, just one sentence destroyed the whole pot.
Easily? Those weren’t arms races; and I’d argue that the genetic-engineering issue is entirely based on the inherent limitation and difficulty of the technology, not an outside agreement to cull an arms race. Leaded gasoline would harm the individual nations even without an agreement.
Is there any agreement in which a country has agreed to cut off its own horns? (One could argue the Russia–US missile agreements, which have been a strategic disaster re China, though those limit quantity, not quality.)
I think your argument about the impact and ability of AI is exactly the reason your agreement would never work, never mind that enforcement would be nearly impossible (I doubt the LLMs are constrained by GPUs). You are trying to have it both ways: AI would give a country a decisive and massive edge in development, but it won’t take it because of an agreement? And do so easily? And no other country will defect? (Even NK defected with nukes.)
I wish I had your optimism about human nature or animal life in general; you are trying to modify evolution.
“Likewise, risks from competing nation states (e.g China) could be mitigated via existing intentional collaboration strategies—nuclear proliferation management techniques like inspections & intelligence agencies keeping check on each other could feasibly serve as a means for the world to prevent the development of AI.”
This is a word salad with zero empirical or theoretical foundation. Gunpowder, greenhouse gases, virus pathology, and many other fields have shown this to be empirically false. We’d all be better off if there weren’t arms races and runaway selection (though would we have evolved in the first place?), but denying this fact gets us nowhere.
Fair. I removed it.
“Removed.”
Here is Ole Peters: [Puzzle] “Voluntary insurance contracts constitute a puzzle because they increase the expectation value of one party’s wealth, whereas both parties must sign for such contracts to exist [Answer]: Time averages and expectation values differ because wealth changes are non-ergodic.”
Peters again: “Conceptually, its power derives from a new notion of rationality. Many reasonable models of wealth are non-stationary processes. Observables representing wealth then do not have the ergodic property of Section I, and therefore rationality must not be defined as maximizing expectation values of wealth. Rather, we propose as a null model to define rationality as maximizing the time-average growth of wealth.”
You write: “Kelly betting, on the other hand, assumes a finite bankroll—and indeed, might have to be abandoned or adjusted to handle negative money.” [Negative interest rates?] Can you explain more? I would love to fit this conceptually into Peters’s non-ergodic growth-rate theory.
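Peters’s puzzle in the quotes above can be made concrete with a toy multiplicative gamble. This is a minimal sketch with my own illustrative numbers (the 1.5x/0.6x payoffs are not from Peters’s paper): the expectation value of each round is positive, yet the time-average growth rate is negative, so almost every individual trajectory decays.

```python
import random

# Toy multiplicative gamble (illustrative numbers, my own choice):
# each round, a fair coin flip multiplies wealth by 1.5 (heads) or 0.6 (tails).
UP, DOWN = 1.5, 0.6

# Ensemble average (expectation value) of one round's growth factor:
expected_factor = 0.5 * UP + 0.5 * DOWN   # 1.05 > 1: looks like a good bet

# Time-average growth factor (geometric mean along one long trajectory):
time_avg_factor = (UP * DOWN) ** 0.5      # sqrt(0.9) ≈ 0.949 < 1: wealth decays

# Simulate a single long trajectory: the time average, not the expectation
# value, is what an individual actually experiences.
random.seed(0)
wealth = 1.0
for _ in range(10_000):
    wealth *= UP if random.random() < 0.5 else DOWN

print(expected_factor, time_avg_factor, wealth)
```

The gap between the two averages is exactly the non-ergodicity Peters points to: maximizing expectation value says take the bet, maximizing time-average growth says refuse it, and an insurance contract can raise the time-average growth of both parties at once.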
My main critique would be that you are protesting too much. If AI increases GDP even just ~+2% yearly (and even for only five to ten years of milking the low-hanging AI fruit), that compounds fast. We do compete with other countries, and since we are the market leader, it is likely we will be able to capture that compounding return. I mean, there is the next $100 trillion on the table, which could easily double.
Which means we have the most to lose by shutting down the clusters. Our risk matches the reward. I am skeptical of opinions from anyone outside the group facing this upside/downside.
How we regulate this work, how we set up systems that mitigate downside risk, while keeping the reward profile high, is a challenge.