After that I tried writing shorter posts, but without the long context the things I write come across as very counterintuitive, so they got ruined.
This sounds like a rationalization. It seems much more likely the ideas just aren’t that high quality if you need a whole hour for a single argument that couldn’t possibly be broken up into smaller pieces that don’t suck.
Edit: Because if the long post is disliked, you can say "well, they just didn’t read it," and if the short post is disliked, you can say "well, it just sucks because it’s small." Meanwhile, it should in fact be pretty surprising that your whole 40-minute post contains no interesting, novel, or useful insight that could be explained in a blog post of reasonable length.
It’s a combination of factors. I got some comments on my posts, so I have a general idea:
My writing style is peculiar; I’m not a native English speaker.
The ideas I convey took three years of modeling. I basically Xerox PARCed the ultimate future (attempted to model it, billions of years from now, and got some results). So when I write, it’s like a Big Bang: ideas flow in all directions and I never have enough space for them.
One commenter recommended changing the title and removing some tags, which I did.
If I use ChatGPT to organize my writing, it removes and garbles things. And when I edit it myself, I like having parentheses within parentheses.
I’m writing a book to solve those problems, but mainly human and AI alignment (we’d better stop AI agents; it’s suicidal to make them) toward the best possible future, to prevent dystopias. It’ll be organized this way:
I’ll start with the “Ethical Big Bang” (physics can be modeled as a subset of ethics),
then chronologically describe and illustrate a binary-tree model of the evolution of inequality (it models freedoms, choices, and quantum paths; the model is simple and ethicophysical, so those things are the same in it), from hydrogen getting trapped in the first stars to
hunter-gatherers getting enslaved by agriculturalists, and
finish with the direct democratic simulated multiverse versus a dystopia where an AI agent has grabbed all our freedoms.
It will also have a list of hundreds of AI safety ideas to consider.