Pollack's Rule states that microprocessor "performance increase due to microarchitecture advances is roughly proportional to [the] square root of [the] increase in complexity". Power consumption, by contrast, increases roughly linearly with complexity. Complexity in this context means processor logic, i.e. its area.
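As a minimal numerical sketch of these two scaling relations (the function names and the linear power model are illustrative assumptions, not part of Pollack's formulation):

import math

def relative_performance(complexity_ratio: float) -> float:
    """Pollack's Rule: performance scales with the square root
    of the increase in complexity (logic area)."""
    return math.sqrt(complexity_ratio)

def relative_power(complexity_ratio: float) -> float:
    """Power is assumed here to scale roughly linearly with complexity."""
    return complexity_ratio

# Doubling a core's logic area buys only ~41% more performance,
# while power draw roughly doubles.
print(relative_performance(2.0))  # ~1.41
print(relative_power(2.0))        # 2.0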
The rule, which is an industry term, is named for Fred Pollack, a lead engineer and fellow at Intel.
Pollack's Rule gained increasing relevance in 2008 due to the broad adoption of multi-core computing and concern among businesses and individuals over the huge electricity demands of computers.
A generous interpretation of the rule allows for an ideal device containing hundreds of low-complexity cores, each operating at very low power and together performing a large amount of processing work quickly. This describes a massively parallel processor array (MPPA), currently used in embedded systems and hardware accelerators.
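To see why many simple cores can be attractive under the rule, consider a fixed silicon budget split across cores. The sketch below assumes perfect parallelism, an idealization that Amdahl's law limits in practice, and uses an illustrative function and numbers:

import math

def aggregate_throughput(total_area: float, n_cores: int) -> float:
    """Aggregate throughput of n equal cores sharing total_area,
    with per-core performance ~ sqrt(core area) per Pollack's Rule."""
    core_area = total_area / n_cores
    return n_cores * math.sqrt(core_area)

# One complex core vs. 64 simple cores on the same area budget:
print(aggregate_throughput(64.0, 1))   # 8.0
print(aggregate_throughput(64.0, 64))  # 64.0, an 8x gain if the
                                       # workload fully parallelizes

In general, splitting the same area into n cores multiplies the ideal aggregate throughput by a factor of the square root of n, which is the arithmetic behind the many-small-cores design point.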
However, Pollack's Rule as originally formulated was considered a limit on the performance improvements available to single CPUs as the number of transistors increased, as well as to multiprocessor systems.
According to Moore's law, each new technology generation doubles the number of transistors. Transistor scaling also increases their speed by about 40%. Pollack's Rule, in turn, implies that microarchitecture advances improve performance by another factor of √2 ≈ 1.4. The overall performance increase is therefore roughly two-fold, while power consumption stays roughly the same. In practice, however, implementing a new microarchitecture in every generation is difficult, so the microarchitecture gains are typically smaller.[1]
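A back-of-the-envelope check of this per-generation arithmetic (the variable names here are illustrative):

import math

transistor_growth = 2.0                    # Moore's law: 2x transistors per generation
speed_gain = 1.4                           # ~40% faster transistors from scaling
uarch_gain = math.sqrt(transistor_growth)  # Pollack's Rule: sqrt(2) ~ 1.41

print(speed_gain * uarch_gain)  # ~1.98, i.e. roughly a two-fold gain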