[…] they set about looking for ways to make an operating system do most of what the current version of Windows does while being more [compressed].
We have an actual example of this here (also, the last progress report). The punchline is “Personal computing in one book” (400 pages × 50 lines per page means 20K lines of code). It is meant to do basically the work of Windows + Office + IE + Outlook, and the compilers are included in those 20 thousand lines.
They end up doing a lot of things that are only applicable to their situation, and couldn’t be used to make a much more powerful operating system. For example, they might look for ways to recycle pieces of code, and make particular pieces of code do as many different things in the program as possible.
Well, no.
They do look for ways to maximize code recycling. However, the result is not less power. On the contrary, they achieve unmatched flexibility. Two examples:
Their graphic stack draws everything, from characters on a page to the very windowing system. As a result, if you suddenly want to rotate a window (and its content) by any angle, you just need to write 2 lines of code to add the feature.
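A minimal sketch of the idea (hypothetical names, not the project’s actual code): if every drawable element, glyphs and windows alike, goes through one shared transform, then rotating a window is just composing one extra rotation into that transform — roughly the “2 lines” in question.

```python
import math

def rotate(angle):
    """Return a 2x2 rotation matrix for `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def apply(m, point):
    """Apply a 2x2 matrix to a 2D point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

class Window:
    def __init__(self, contents):
        self.contents = contents            # (x, y) points the window draws
        self.transform = [[1, 0], [0, 1]]   # identity by default

    def draw(self):
        # Everything the window contains passes through the same transform,
        # so one change to the transform affects all of its content.
        return [apply(self.transform, p) for p in self.contents]

w = Window([(1, 0), (0, 1)])
w.transform = rotate(math.pi / 2)  # the whole "rotate a window" feature
```

The point is structural: because nothing bypasses the shared pipeline, the feature costs a transform assignment, not a rewrite of every drawing routine.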
Their language stack is tiny, yet quite capable. It goes from assembly to Javascript in less than 2000 lines. As a result, adding a new language (say Prolog) typically takes one or two hundred lines of additional code. That makes domain-specific languages much cheaper than they used to be.
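A toy illustration of why a small language tower makes extensions cheap (the evaluator and its expression format are invented for this sketch, not the project’s actual stack): with a tiny core evaluator, a new construct is a line or two, not a new compiler.

```python
def evaluate(expr, env):
    """Evaluate a nested-tuple expression: numbers, variable names, operator forms."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]           # variable lookup
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if op == "+":
        return vals[0] + vals[1]
    if op == "*":
        return vals[0] * vals[1]
    # Extending the "language" with a new construct is one line here:
    if op == "max":
        return max(vals)
    raise ValueError(f"unknown operator: {op}")
```

The smaller the core, the smaller the marginal cost of the next language on top of it — which is the economics behind cheap DSLs.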
Now to go from flexibility to power, one does need human input. But at least it’s easier.
(Note that I have swept the runtime performance problems under the carpet. My bet is, if we generalize FPGA-like processors (with memristors?), it won’t matter, because one could optimize the hardware for the software, instead of optimizing the software for the hardware.)
Compression is actually a very important skill for programmers that tends to correlate with experience. More compressed code → less redundancy → less space for inconsistencies to arise on modification.
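A small illustration of the point (invented example): three near-identical functions compress into one, so a later change happens in one place instead of three.

```python
# Before: redundancy -- the same logic lives in three places, so a future
# change (say, a new field name) can be applied inconsistently.
def daily_total(items):
    return sum(i["amount"] for i in items if i["period"] == "day")

def weekly_total(items):
    return sum(i["amount"] for i in items if i["period"] == "week")

def monthly_total(items):
    return sum(i["amount"] for i in items if i["period"] == "month")

# After: one definition carries the shared logic; the three variants are
# now just three call sites.
def total(items, period):
    return sum(i["amount"] for i in items if i["period"] == period)
```

Less redundancy means fewer places for the copies to drift apart when the code is modified — exactly the “less space for inconsistencies” above.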
Now this is really interesting! If we take this and extrapolate it the same way as we did our previous misconception, it seems like having so little complexity to work with is an important factor in causing the generality!
Predictions from this:
Species with a lower mutation rate and more selection pressure, while they would seem much better off at first glance, would have to advance much further before reaching similar amounts of generality. (Makes for great sci-fi!)
Approaches to AI that are very minimal, accessible at a low level from within, and entangled with every other function of the actual physical computer may be a better idea than one would otherwise expect. (Which, depending on what you’d expect, might still not be much.)
it seems like having so little complexity to work with is an important factor in causing the generality
Probably. My favourite example here is first-class functions in programming languages. There is talk about “currying”, “anonymous functions”, “closures”… which needlessly complicates the issue. They look like additional features that complicate the language and make people wonder why they would ever need them.
On the other hand, you can turn this reasoning on its head if you think of functions as mere mathematical objects, like integers. Then the things you can do with integers but not with functions (arithmetic aside) are restrictions. Lifting those restrictions would make your programming language both simpler and more powerful.
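To make this concrete (a sketch in Python): once functions are ordinary values like integers, closures and currying stop looking like extra features and fall out of lifting restrictions.

```python
def add(x):
    def add_x(y):       # a closure: add_x remembers x
        return x + y
    return add_x        # functions can be returned, like any other value

increment = add(1)      # "currying" is just partial application
ops = [increment, add(10)]  # and functions can sit in data structures,
                            # exactly as integers can
```

Nothing here is a special-purpose feature; it is what remains when functions are allowed to be stored, passed, and returned like every other value.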
Now there’s a catch: not all complexity lies in arbitrary quirks or restrictions. You need a minimum amount to do something useful. So I’m not sure to what extent “simplify as much as you can” generalizes. It sure is very helpful when writing programs, though.
There’s a catch however: the complexity I removed here was purely destructive. Using the general formulae for the edge cases merely lifted restrictions! I’m not sure that’s always the case. You do need a minimum amount of complexity to do anything. For instance, Windows could fit in a book if Microsoft cared about that, so maybe that’s why it (mostly) doesn’t crash down in flames. On the other hand, something that really cannot fit in less than 10 thousand books is probably beyond our comprehension. Hopefully a seed FAI will not need more than 10 books. But we still don’t know everything about morality and intelligence.
Intuitively, the complexity of the program would have to match the complexity of the problem domain. If it’s less, you get lack of features and customizability. If it’s more, you get bloat.
What about 10 thousand cat videos? :p
But yea, upvoted.