Robin talks about a class system where those running at higher speeds are in a higher social class. But given Robin’s assumption that the cost of emulation speed will be linear up to 1,000,000x, I don’t see why anyone (except those doing physical work, which Robin estimates at 20% of the population) wouldn’t want to run at the top end of this range. In his scenario of a boss running at 21x the speed of the workers, why isn’t the whole team being run at the higher speed? Does anyone understand his reasoning here?
In order for faster ems to talk to each other naturally, they have to be closer to each other, and thus occupy more expensive prime real estate. So they don’t want to be faster than they need to be to match the other tasks with which they coordinate.
That doesn’t make intuitive sense to me. Surely even fast ems living in cheap real estate will still have plenty (millions? billions?) of people to talk to in real time, even if fewer than those living in prime real estate? Given that running slower has significant costs (you pay the same storage costs as faster ems but get less work done, and you accumulate experience and knowledge more slowly than others, losing competitive edge as a result), I don’t see how it’s worth those costs just to have more potential people to talk to.
Also, if you’re assuming that it’s fairly cheap to speed up and slow down emulations, which you apparently are, why don’t they run at a fast speed normally and only slow down when they need to talk to distant others with low subjective latency, which may be pretty rare?
For any given task there will be particular people you need to talk to to get it done. I expect hardware would be specialized for particular speeds, but that minds could be moved between hardware of differing speeds in order to change speeds. In general most tasks have a particular time by which they need to be done, with only minor rewards for doing them much sooner.
In his scenario of a boss running at 21x the speed of the workers, why isn’t the whole team being run at the higher speed? Does anyone understand his reasoning here?
How fast you run these employees depends on the economics of your industry. I think the idea is that coordination failure is expensive, so if running bosses faster than workers avoids such failures, it seems more justified to run bosses faster than nearly any other kind of worker. The value of good management in a large company is much higher than the productivity boost any one low-level worker could achieve. He touches on this when he notes that it is vital that the most competent people are as high up the chain as possible.
Sorry, I think I didn’t explain well enough why it doesn’t make sense to me, so let me try again. In his example there are 256 workers and 64 line bosses running at 1x, and a CEO running at 21x. Why not instead have 16 workers, 4 line bosses, 1 CEO, all running at 16x, which would do the same amount of work in the same amount of time? If we assume that 21x is the maximum feasible emulation speed, it doesn’t seem plausible that slowing down the workers to 1x saves enough money (compared to running them at 16x) to make up for increasing the memory requirement by 16 times.
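To make that tradeoff concrete, here’s a toy calculation (the unit costs are made up for illustration) under the thread’s assumptions: speed cost is linear, and each running em needs its own copy of its brain state in memory, regardless of speed.

```python
# Toy comparison of the two org structures discussed above.
# Hypothetical assumptions: running one em at 16x costs the same CPU as
# running 16 ems at 1x (linear speed cost), and every running em needs one
# full copy of its brain state in memory.

def org_cost(n_minds, speed, mem_per_mind=1.0, cpu_per_speed=1.0):
    """Return (subjective work, CPU cost, memory cost) per wall-clock hour."""
    work = n_minds * speed                     # subjective labour-hours produced
    cpu = n_minds * speed * cpu_per_speed      # total instructions executed
    mem = n_minds * mem_per_mind               # copies of brain state held
    return work, cpu, mem

# Robin's version: 256 workers + 64 line bosses at 1x (CEO ignored here).
work_a, cpu_a, mem_a = org_cost(256 + 64, 1)
# Alternative: 16 workers + 4 line bosses at 16x.
work_b, cpu_b, mem_b = org_cost(16 + 4, 16)

print(work_a, cpu_a, mem_a)   # 320 320.0 320.0
print(work_b, cpu_b, mem_b)   # 320 320.0 20.0
```

Same subjective output, same total CPU, but the small fast team holds 16x less brain state in memory, which is the crux of the objection.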
Theoretically you’d be running each of the 256 workers at 1,000,000x speed already. The boss goes up to 21,000,000x, but has to pay a non-linear cost for that, so you can only have one person at that speed. It would require a very particular price/speed discrimination structure to make that viable, though.
The other option is that you’re in a job that needs 256 different skill sets and we haven’t learned how to swap out parts of people’s personality yet. E.g., you’re translating a book into 256 languages and each person only knows one language.
Although neither scenario strikes me as particularly likely.
In his example there are 256 workers and 64 line bosses running at 1x, and a CEO running at 21x. Why not instead have 16 workers, 4 line bosses, 1 CEO, all running at 16x, which would do the same amount of work in the same amount of time?
Evidence suggests that coordination is hard.
That 16 workers running at 16x, overseen by 4 line bosses and 1 CEO, will suffer more coordination failure than 256 workers at 1x with one CEO running at 21x seems plausible, and even likely, if you assume these are human-like minds.
Robin’s scenario is 256 workers and 64 line bosses running at 1x plus one CEO running at 21x who directly oversees the line bosses, not the workers (you can check for yourself here). This offers no coordination advantages over having 16 workers, 4 line bosses and one CEO all running at 16x, as far as I can tell.
Sorry, I misremembered the example; thank you for the correction.
I’m not sure about that. What I’ve seen of the management literature suggests that the complexity of coordination and oversight problems is strongly nonlinear in the number of workers overseen, while clocking faster would produce only linear improvements. It might still make sense to run the CEO at a faster clockspeed, since that role has to deal with additional coordination problems that aren’t entirely under the company’s control, but this line of thought suggests to me that smaller numbers of more individually productive workers would be more efficient overall than maximizing workforce size.
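A hypothetical illustration of that nonlinearity (this is a toy model, not anything taken from the management literature): if coordination cost tracks the number of pairwise communication channels in a team, it grows roughly quadratically with headcount, while speeding up a smaller team only has to compensate linearly.

```python
# Toy model: coordination overhead as the number of pairwise channels
# among n workers, i.e. n*(n-1)/2. Purely illustrative.

def coordination_overhead(n):
    # One communication channel per pair of workers.
    return n * (n - 1) // 2

# 256 slow workers vs 16 workers sped up 16x (same total output):
print(coordination_overhead(256))  # 32640 pairwise channels
print(coordination_overhead(16))   # 120 pairwise channels
```

Under this (made-up) cost model the small fast team faces roughly 1/270th of the coordination surface while producing the same subjective work, which points the same way as the comment above: fewer, faster workers.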
Robin may have been assuming abundant memory and scarce CPU time? I agree though that unless memory costs are very low this is a problem in the examples.
Robin may have been assuming abundant memory and scarce CPU time?
He’s not saving on CPU time (i.e., total number of instructions executed), but substituting more, slower processors for fewer, faster processors, and also using more memory. We don’t see a lot of this today: for example, render farms and data centers all use essentially the fastest CPUs available. Some operations might back off a few notches from the bleeding edge to save money, but it’s not even close to 2x, much less 21x. My earlier “doesn’t seem plausible” may be too strong, but I don’t understand why Robin seems to be predicting this as the most likely scenario. If he has specific reasons why the economics will likely work out this way, I’d very much like to see them.
We don’t see a lot of this today...it’s not even close to 2x much less 21x.
We see plenty of this today. Every processor with multiple slower cores rather than a single core screaming at 4 GHz is making the slow-parallel vs fast-serial tradeoff. Processor migration and power-saving modes are other examples where the tradeoff is made dynamically. ARM processors are hugely abundant in embedded and mobile spaces, and the ARM design is an example of trading off CPU time for other things like reduced transistor count or (especially) power consumption. ARM and Atom chips are making inroads into datacenters because power consumption and cooling are becoming such issues, and we can expect parallelisation to continue for power saving.
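A rough sketch of why that tradeoff favours slow cores: dynamic CPU power scales roughly as C·V²·f, and since supply voltage must rise roughly in step with frequency, power grows roughly as f³. The constants below are made up; only the scaling matters.

```python
# Toy model of the slow-parallel vs fast-serial tradeoff: at a fixed power
# budget, how much aggregate clock can you buy at each core frequency?
# Assumes (roughly) power ~ f**3; all units are arbitrary.

def core_power(freq_ghz):
    return freq_ghz ** 3                        # power per core, arbitrary units

def throughput_in_budget(freq_ghz, power_budget):
    n_cores = int(power_budget // core_power(freq_ghz))
    return n_cores * freq_ghz                   # total GHz of parallel work

budget = 64.0
print(throughput_in_budget(4.0, budget))        # 1 fast core  -> 4.0
print(throughput_in_budget(1.5, budget))        # 18 slow cores -> 27.0
```

So at equal power, the 1.5 GHz cores deliver several times the aggregate throughput of one 4 GHz core, as long as the workload parallelises; serial speed is what costs a premium.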
Hmm, apparently my knowledge of server hardware was a bit outdated. ARM processors being used in data centers run at about 1.5 GHz, and it looks like extreme overclocking can push x86 processors up to 8 GHz, which gives a factor of about 5x. So probably there will be some significant difference between the fastest and slowest uploads, and 21x may not be totally implausible.
One benefit of running at a lower speed is that you can interact with things farther away from you while it still seems instantaneous, although I have no idea why that would be more important for the workers than for the boss.