It’s a combination of factors. I got some comments on my posts, so I have a general idea:
My writing style is peculiar; I’m not a native speaker.
The ideas I convey took 3 years of modeling. I basically Xerox PARCed the ultimate future, billions of years from now (attempted it and got some results). So when I write, it’s like a Big Bang: ideas flow in all directions and I never have enough space for them.
One commenter recommended changing the title and removing some tags, and I did.
If I use ChatGPT to organize my writing, it removes and garbles things. And when I edit myself, I like having parentheses within parentheses.
I’m writing a book to solve those problems, but mainly human and AI alignment (we’d better stop AI agents; it’s suicidal to make them) towards the best possible future, to prevent dystopias. It’ll be organized this way:
I’ll start with the “Ethical Big Bang” (physics can be modeled as a subset of ethics),
then I’ll chronologically describe and show a binary tree model of the evolution of inequality (it models freedoms, choices, and quantum paths; the model is simple and ethicophysical, so those things are the same in it; see the sketch after this list), from hydrogen getting trapped in the first stars to
hunter-gatherers getting enslaved by agriculturalists, and
finish with the direct democratic simulated multiverse vs. the dystopia where an AI agent has grabbed all our freedoms.
It will also include a list of hundreds of AI safety ideas to consider.
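The binary tree model itself isn’t spelled out above, so here is a minimal sketch of one way such a tree of choices could be represented, assuming each node is a state of the world and each branch is one of two available choices (a “freedom”). All the names (ChoiceNode, count_paths) and the example labels are hypothetical illustrations, not the book’s actual model.

```python
# Minimal sketch of a binary choice tree: each node is a state,
# each branch one of two available choices ("freedoms").
# Names and labels are hypothetical, not the book's actual model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChoiceNode:
    label: str                            # description of the state
    left: Optional["ChoiceNode"] = None   # one possible choice/path
    right: Optional["ChoiceNode"] = None  # the other possible choice/path

def count_paths(node: Optional[ChoiceNode]) -> int:
    """Count the distinct root-to-leaf paths, i.e. possible futures."""
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1  # a leaf ends one complete history
    return count_paths(node.left) + count_paths(node.right)

# Example: two levels of binary choices give four possible futures.
root = ChoiceNode(
    "hydrogen cloud",
    ChoiceNode("trapped in a star",
               ChoiceNode("fused into heavy elements"),
               ChoiceNode("locked in a stellar remnant")),
    ChoiceNode("free in space",
               ChoiceNode("drifting interstellar gas"),
               ChoiceNode("accreted much later")),
)
print(count_paths(root))  # -> 4
```

One way to read the model quantitatively: a full tree of depth d has 2^d leaves, so every additional binary choice doubles the space of reachable futures, and every choice taken away (by an AI agent, say) halves it.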