This isn’t that important, but I think the idea of using an exponential parallelization penalty is common in the economics literature. I used 0.4 specifically because it’s around the harshest penalty I’ve heard of. I believe this number comes from studies on software engineering that found something like this.
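To make the penalty concrete, here is a minimal sketch of what an exponential parallelization penalty looks like (the functional form and the function name are my illustration, not from any specific study): total effective output scales as the number of parallel workers raised to an exponent, with 0.4 as the harsh value mentioned above.

```python
def parallel_output(n_workers: int, alpha: float = 0.4) -> float:
    """Effective research output of n_workers relative to one worker,
    under an exponential parallelization penalty: output = n ** alpha.
    alpha = 1 means perfect parallelization; lower alpha means a
    harsher penalty on adding parallel workers."""
    return n_workers ** alpha

# With alpha = 0.4, doubling the workforce multiplies output
# by only 2 ** 0.4 (about 1.32), so marginal workers contribute
# much less than the first ones.
```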
I’m currently skeptical that toy models of DAGs/tech trees will add much value over:
Looking at how parallelized AI R&D is right now.
Looking at what people typically find in the economics literature.
(Separately, AIs might be notably better at coordinating than humans are, which might change things substantially. Toy models of this might be helpful.)