About encoding experiences
All of the sense organs are connected to the central nervous system via neural pathways. In the limit, you could take a cross-section of all the nerve fibers, let each fiber occupy one digit, and mark it 1 if it is in a spiking state and 0 otherwise. That would cover all of your experience pretty clearly, or at least carry the same information content (making the temporal slicing fair might be a bit tricky, but any sampling frequency higher than what the circuits themselves use will suffice). It doesn't matter whether a signal comes from an eye or an ear: it WILL need to pass through that intervening format. So while a packet size of a million might be a lot, guaranteed universal coverage at a fixed finite length can buy us quite a lot in theory landia.
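The cross-section idea can be sketched in a few lines. This is a hypothetical toy, not anatomy: the fiber count and the spiking set are made up, and the point is only that every possible slice of experience maps to exactly one fixed-length bit string.

```python
# Toy sketch: encode one temporal slice of N nerve fibers as a fixed-length
# bit string, 1 = spiking, 0 = quiet. N_FIBERS is illustrative; a real
# cross-section would be on the order of millions of digits per slice.

N_FIBERS = 8

def encode_slice(spiking_fibers):
    """Map the set of currently spiking fiber indices to a bit string."""
    return "".join("1" if i in spiking_fibers else "0" for i in range(N_FIBERS))

snapshot = encode_slice({1, 3, 4})  # fibers 1, 3 and 4 are firing
# Every slice lands on exactly one of 2**N_FIBERS strings, which is the
# "guaranteed universal coverage at fixed finite length" being claimed.
```

A full experience stream would then just be these slices concatenated, one per sampling tick.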
Or one could use the natural format of the experience formers. Maybe have a 100x100 array of three-digit RGB values, linearised. Have a few digits for smell, with one symbol for each type of olfactory receptor that humans have. Whatever range of data your measurers can produce, just use an encoding that has universal coverage. We will be going over all unstructured programs anyway, so having them written in terms of weird colour- and smell-mixing qualia stand-ins does not make them any less approachable or readable. They are already zero readable, so it can't go down from there! The UTMs will cover these "color machines" anyway. And if you have a strong distaste for arbitrary symbols, just number all of them and use integers as stand-ins (and use binary sequences as stand-ins for the integers).
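The "natural format" pipeline might look like this. Everything here is an assumed stand-in: the tiny image, the receptor names, and the fixed 8-bit width are all invented for illustration; the only real claim is symbols → integers → binary sequences.

```python
# Hypothetical sketch: linearise an RGB array plus some olfactory-receptor
# symbols into one integer stream, then stand in bits for the integers.

def linearise(rgb_rows, smells, receptor_ids):
    """Flatten pixels row by row, then append one integer per smell symbol."""
    stream = [channel for row in rgb_rows for pixel in row for channel in pixel]
    stream += [receptor_ids[s] for s in smells]
    return stream

def to_bits(stream, width=8):
    """Binary stand-ins for the integers; a fixed width keeps it decodable."""
    return "".join(format(v, f"0{width}b") for v in stream)

rgb = [[(255, 0, 0), (0, 255, 0)]]        # a 1x2 stand-in for the 100x100 array
receptor_ids = {"OR1A1": 1, "OR2J2": 2}   # invented receptor-type symbols
bits = to_bits(linearise(rgb, ["OR1A1"], receptor_ids))
```

The same scheme scales to any measurer: as long as each symbol gets a unique integer and a fixed width, the encoding covers the whole range the instruments can produce.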
One could do this at a higher conceptual level to try to manage the combinatorial explosion, but then you have the issue that multiple low-level sensations correspond to only one high-level sensation. If your problem guarantees the boundary conditions, this can be fine, but part of the reason we are going model-less is that we don't want to get trapped by modelling assumptions. You can't make a wrong modelling assumption if you don't make any; it is all just unconceptualised raw sense data. As for questions of speed, I would think the balance is more about the percentage of problems we flat-out fail or have no hope of working on versus the percentage of problems we are too slow to do effectively. There is no need to be able to deal with red alligators if the jungle doesn't have any, and being 10% slower on green alligators in exchange for universal alligator coverage isn't exactly sexy from a natural-selection standpoint.