By the end of 2013: Either the Iranian regime is overthrown by popular revolution, or there is an overt airstrike against Iran by either the US or Israel, or Israel is attacked by an Iranian nuclear weapon (70%).
Essentially seconding mattnewport: the price of gold reaches $3000 USD, or inflation of the US dollar exceeds 12% in one year (65%).
The current lull in the increase of the speed at which CPUs perform sequential operations comes to an end, yielding a consumer CPU that performs sequential integer arithmetic operations 4x as quickly as a modern 3GHz Xeon (80%).
Android-descended smartphones outnumber iPhone-descended smartphones (60%).
The number of IMAX theaters in the US triples (40%).
http://predictionbook.com/predictions/1699
http://predictionbook.com/predictions/1375
http://predictionbook.com/predictions/1700
http://predictionbook.com/predictions/1698
http://predictionbook.com/predictions/1701
When you say sequential integer operations, do you mean integer operations that really are sequential? In other words, the instructions can’t be performed in parallel because of data dependencies? If not, then this is already possible with a sufficiently wide superscalar processor or really big SIMD units.
But let’s assume you really mean sequential integer operations. The only pipeline stage in this example that can’t work on several instructions at once is the execute stage, so I’m assuming that’s where the bottleneck is here. This means that the speed is limited by the clock frequency. So, here are two ways to achieve your prediction:
Crank up the clock! Find a way to get it up to 12 GHz without burning up.
Make the execute stage capable of running much faster than the rest of the processor does. This is natural for asynchronous processors; in normal operation the integer functional units will be sitting idle most of the time waiting for input, and the bulk of the time and complexity will be in fetching the instructions, decoding them, scheduling them, and in memory access and I/O. But in your contrived scenario, the integer math units could just go hog wild and the rest of the processor would keep them fed. This can be done with current semiconductor technology, I’m pretty sure.
So, either way, kind of an ambitious prediction. I like it.
Have you not heard that they discovered a way to use graphene as a one-to-one replacement for copper in chip production? That alone will allow speeds of 12-15 GHz.

I would put near-100% certainty on faster multicore chips, running at many times current speeds, being available by 2011-2012.
This seems to be still very far from application: a quick search on your claim turned up only this paper, which isn't cited by anybody yet and has been publicized in a few popular articles.
Let’s assume they put it into practice and start mass-producing processors with graphene interconnects with better-than-copper resistivity. We’ve got two things to worry about here: speed and power.
The speed of signal propagation along a wire depends on RC, the product of the resistance and the capacitance. Graphene lowers the resistance of a wire of a given size, but does nothing to lower the capacitance—that depends on the insulator surrounding the wire and the shape of the wire and its proximity to other wires. The speed gains from graphene look moderate, but significant.
The power dissipated by sending signals through wires will be most of the power of future processors, if current trends continue. Power is a barrier to clocking chips fast. We can overclock processors a lot, but you’ve got to worry about them burning up. Decreasing resistivity improves the power situation somewhat, but the bulk of the interconnect’s influence on power comes from its capacitance. Transistors have to charge and discharge the capacitance of the wires, and that takes power. So on power, graphene will help somewhat, but it’s not the slam-dunk that Valkyrie Ice is expecting.
tl;dr: Graphene interconnect sounds good, but not fantastic.
Thank you—I had wanted to write something along similar lines in response to Valkyrie Ice’s comment, but wouldn’t have ended up with something this compact.
I’ll add that clocking is just a piece of the puzzle when it comes to making computers that compute faster.