Something feels “wrong” to me about this proposal: it doesn’t account for Moore’s law. Sending objects to space is expensive (even with a launch loop or a space elevator it would still be costly, if much less so than with traditional rockets), so you can’t renew the “server sky” every few years. But under Moore’s law, the computers of a few years from now are much more powerful than the computers of today. Launch “server sky” now, and in 10 years we can build servers 32 times faster… but we can’t upgrade the ones already in orbit.
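The 32× figure follows from the classic Moore’s-law assumption of one performance doubling roughly every two years (the two-year period is my assumption; the comment only gives the end result):

```python
# Assumed: performance doubles every 2 years (the usual Moore's-law cadence).
DOUBLING_PERIOD_YEARS = 2
years_in_orbit = 10

speedup = 2 ** (years_in_orbit / DOUBLING_PERIOD_YEARS)
print(speedup)  # 32.0
```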
The other problem is ping: with a distance of 12,789 km, the best round-trip ping you can get is about 85 ms, assuming no other delays and that the nearest satellite can answer you directly, which rules out many possible uses.
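These figures are pure speed-of-light geometry, so they are easy to sanity-check (distances taken from the comments; no processing or routing delay included):

```python
# Best-case round-trip ping: light travelling the slant range out and back,
# with zero processing or routing delay.
C = 299_792_458  # speed of light in vacuum, m/s

def rtt_ms(distance_km: float) -> float:
    """Round-trip light delay in milliseconds for a one-way distance in km."""
    return 2 * distance_km * 1e3 / C * 1e3

print(rtt_ms(12_789))  # ~85.3 ms
print(rtt_ms(10_500))  # ~70.0 ms, matching the m288 figure quoted later
```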
In a recent reply to the comments on Brin’s post, Keith Lofstrom mentions using obsolete sats as ballast for much thinner sats that would be added to the arrays as the manufacturing process improves.
He acknowledges that ping times are going to be limited: higher than you can theoretically get with a fat pipe, but still much better than what you get with GEO.
The m288 central orbit can be seen at 58 degrees north and south latitude, at a distance of 10500 km. The round trip ping time is 70 milliseconds. The ground ping time through optical fiber across the United States is faster in theory, but ground networks are slowed by switches and indirect routes. Ping times from fat-pipe servers in Dallas Texas to mit.edu are 42 milliseconds, and to orst.edu are 49 milliseconds, so 70 msec is not way out of line. However, much of the routing will travel “around the cloud”, and without local caching in the “near” links, some pings may need as much as 200 milliseconds to hop from the far side of the orbit. Still, this is better than the 250+ millisecond ping time through a geosynchronous satellite.
For lots of processor-heavy things (mining bitcoin, rendering animations, what have you) it isn’t especially crucial. High-frequency stock trading is probably out.
For lots of processor-heavy things (mining bitcoin, rendering animations, what have you) it isn’t especially crucial.
The key thing about those isn’t that they’re processor-heavy; it’s that they’re very parallelizable, and have minimal data dependencies between subtasks. For an example of something that isn’t like this, calculating scrypt hashes is very processor-heavy, but is provably hard to parallelize.
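The sequentiality comes from scrypt’s construction: each step’s input depends on the previous step’s output, so step i cannot begin until step i−1 finishes. A toy sketch of that dependency chain, using SHA-256 as a stand-in for scrypt’s internal mixing function purely to illustrate the data dependence (this is not real scrypt):

```python
import hashlib

def sequential_chain(seed: bytes, n: int) -> bytes:
    """Each iteration's input is the previous iteration's output,
    so the n steps cannot be distributed across cores."""
    x = seed
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

# Contrast with Bitcoin-style mining, where millions of candidate nonces
# are independent and each can be hashed on a separate core.
```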
I suspect that most interesting calculations will bottleneck on communication latency.