You have it backwards. The message is not the data they send, but the medium they use for sending it.
When the combined brainpower of Earth turns to analyzing the message, the first question shouldn’t be what pattern the signals form, but how anyone could form a pattern across millions of light-years at all.
At that moment you drop any hypotheses that rule out that possibility, and focus only on those that remain corroborated.
You use the combined brainpower of Earth: individuals or small groups of scientists work on every hypothesis they can imagine. The only important thing is that they work in parallel, creating as many hypotheses as possible. As you falsify hypotheses, you arrive at a better description of the universe.
A small group of empirical scientists keeps tracking the message for millennia, while the rest of humanity moves into a new paradigm.
Within one generation you find a practical use for the new theoretical physics, invade the alien species’ realm, and create a new kind of Spam out of their flesh.
My point: you don’t need data to derive laws; you only need it to falsify the laws you imagined. A Bayesian superintelligence is forced to derive laws from the observable world, but it will never have a breakthrough; we have the luxury of imagining laws and simply waiting for falsification. I am not sure we think of theories, as you say. Although we just don’t understand yet how we imagine them, my guess is that the breakthrough process is some form of parallel computing that starts with infinite possibilities and moves on through falsification until it arrives at an “idea”, which then needs to go through a similar process in the outside world.
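The conjecture-then-falsify process I’m gesturing at can be made concrete with a toy loop (the candidate “laws” and observations below are purely my own illustration): imagine a large hypothesis space up front, then use data only to eliminate candidates, never to generate them.

```python
# Toy sketch of conjecture-then-falsify: enumerate many imagined "laws"
# first, then let observations do nothing but prune the inconsistent ones.
observations = [(1, 1), (2, 4), (3, 9)]  # (input, output) pairs

# Imagined hypothesis space: laws of the form y = a * x**p + b
# for small integer coefficients (an arbitrary illustrative choice).
hypotheses = [
    (a, p, b)
    for a in range(-3, 4)
    for p in range(0, 4)
    for b in range(-3, 4)
]

def falsified(hypothesis, data):
    """A hypothesis dies the moment any single observation contradicts it."""
    a, p, b = hypothesis
    return any(a * x**p + b != y for x, y in data)

# The data never suggested y = x**2; it only killed everything else.
survivors = [h for h in hypotheses if not falsified(h, observations)]
print(survivors)
```

Run on the three observations above, only `(a=1, p=2, b=0)` — i.e. y = x² — survives the cull, which is the point: the law was imagined in advance, and the data’s only job was falsification.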
To summarize the wall of text: “AI is impossible (edit: or at least superhuman AI is) because humans use infinite computing power.” I’m still slightly disappointed that nobody responded, but now I can see why.
Ya, that and “A Bayesian superintelligence is forced to derive laws from the observable world, but it will never have a breakthrough, we have the luxury of imagining laws and just wait for falsification.”
i.e. “humans have a magickal ability to think outside the box—which an AI can never have and thus can never think new thoughts”