Within theoretical computer science, the Sun–Ni law (or Sun and Ni's law, also known as memory-bounded speedup) states that as computing power increases, the corresponding increase in problem size is constrained by the system's memory capacity. In general, as a system grows in computational power, the problems run on the system grow in size. Analogous to Amdahl's law, which says that the problem size remains constant as system size grows, and Gustafson's law, which proposes that the problem size should scale but be bounded by a fixed amount of time, the Sun–Ni law states that the problem size should scale but be bounded by the memory capacity of the system. The Sun–Ni law [1] [2] was initially proposed by Xian-He Sun and Lionel Ni in the Proceedings of the IEEE Supercomputing Conference 1990.
With the increasing disparity between CPU speed and memory data access latency, application execution time often depends on the memory speed of the system.[3] As predicted by Sun and Ni, data access has become the premier performance bottleneck for high-end computing. From this one can see the intuition behind the law; as system resources increase, applications are often bottlenecked by memory speed and bandwidth, thus an application can achieve a larger speedup by utilizing all the memory capacity in the system. The law can be applied to different layers of a memory hierarchy system, from L1 cache to main memory. Through its memory-bounded function, W=G(M), it reveals the trade-off between computing and memory in algorithm and system architecture design.
All three speedup models, Sun–Ni, Gustafson, and Amdahl, provide a metric to analyze speedup for parallel computing. Amdahl's law focuses on the time reduction for a given fixed-size problem: it states that the sequential portion of the problem (algorithm) limits the total speedup that can be achieved as system resources increase. Gustafson's law suggests that it is beneficial to build a large-scale parallel system, as the speedup can grow linearly with the system size if the problem size is scaled up to maintain a fixed execution time.[4] Yet as memory access latency often becomes the dominant factor in an application's execution time,[5] applications may not scale up to meet the time-bound constraint. The Sun–Ni law, instead of constraining the problem size by time, constrains it by the memory capacity of the system; in other words, the speedup is memory-bounded. The law is a generalization of Amdahl's law and Gustafson's law: when the memory-bounded function G(M)=1 it reduces to Amdahl's law, and when G(M)=m, with m the number of processors, it reduces to Gustafson's law.
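The reduction of the Sun–Ni model to the other two laws can be checked numerically. The following sketch (the function names and the values of f and m are illustrative, not from the article) evaluates all three speedup models and shows the memory-bounded form collapsing to Amdahl's law when G(m)=1 and to Gustafson's law when G(m)=m:

```python
def amdahl(f, m):
    # Fixed-size speedup: the sequential fraction (1 - f) limits the gain.
    return 1.0 / ((1 - f) + f / m)

def gustafson(f, m):
    # Fixed-time (scaled) speedup: problem size grows with the system.
    return (1 - f) + f * m

def sun_ni(f, m, G):
    # Memory-bounded speedup with memory-bound function G(m).
    return ((1 - f) + f * G(m)) / ((1 - f) + f * G(m) / m)

f, m = 0.9, 16
print(sun_ni(f, m, lambda m: 1), amdahl(f, m))     # equal: reduces to Amdahl
print(sun_ni(f, m, lambda m: m), gustafson(f, m))  # equal: reduces to Gustafson
```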
Let $W^*$ be the scaled workload under a memory space constraint. The memory-bounded speedup is defined as the sequential execution time of the scaled workload divided by its parallel execution time.

Suppose $f$ is the fraction of the workload that can be parallelized and $(1-f)$ is the sequential fraction of the workload.

Let $y = g(x)$ be the function that reflects the workload increase factor as the memory capacity increases. Let:

$W = g(M)$ and $W^* = g(m \cdot M)$,

where $M$ is the memory capacity of one node and $m$ is the number of nodes. Thus,

$W^* = g(m \cdot g^{-1}(W))$
The memory-bounded speedup is then:

$\text{Speedup}_{\text{memory-bounded}} = \frac{(1-f)W + f \cdot g(m \cdot g^{-1}(W))}{(1-f)W + \frac{f \cdot g(m \cdot g^{-1}(W))}{m}}$
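A minimal sketch of evaluating this formula directly from $g$ and its inverse (the quadratic workload function g(x) = x**2 and all numeric values here are hypothetical, chosen only so that g is easy to invert):

```python
import math

def g(x):        # hypothetical workload as a function of memory capacity
    return x ** 2

def g_inv(w):    # inverse of g: memory needed to hold workload w
    return math.sqrt(w)

def memory_bounded_speedup(W, f, m):
    W_star = g(m * g_inv(W))                 # scaled (memory-bounded) workload
    seq_time = (1 - f) * W + f * W_star      # one processor on the scaled work
    par_time = (1 - f) * W + f * W_star / m  # parallel part spread over m nodes
    return seq_time / par_time

print(memory_bounded_speedup(W=100.0, f=0.9, m=4))
```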
For any power function $g(x) = ax^b$ and for any dimensionless multiplier $m$,

$g(mx) = a(mx)^b = m^b \cdot ax^b = m^b g(x) = \bar{g}(m) g(x),$

where $\bar{g}(m) = m^b$.
Thus, by taking the highest-degree term to determine the complexity of the algorithm, one can rewrite the memory-bounded speedup as:

$\frac{(1-f)W + f \cdot \bar{g}(m)W}{(1-f)W + \frac{f \cdot \bar{g}(m)W}{m}} = \frac{(1-f) + f \cdot \bar{g}(m)}{(1-f) + \frac{f \cdot \bar{g}(m)}{m}}$
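The cancellation of $W$ and of the coefficient $a$ can be checked numerically: for a power function g(x) = a*x**b, the full formula built from g and its inverse agrees with the reduced form built from g_bar(m) = m**b. The constants a, b, f, m, and W below are illustrative:

```python
a, b = 2.5, 1.5
f, m, W = 0.95, 64, 1e6

g     = lambda x: a * x ** b
g_inv = lambda w: (w / a) ** (1 / b)
g_bar = lambda m: m ** b          # power function with the coefficient stripped

full    = ((1 - f) * W + f * g(m * g_inv(W))) / \
          ((1 - f) * W + f * g(m * g_inv(W)) / m)
reduced = ((1 - f) + f * g_bar(m)) / ((1 - f) + f * g_bar(m) / m)
print(full, reduced)              # a and W cancel, so the two forms agree
```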
In this equation, $\bar{g}(m)$ represents the influence of memory change on the change in problem size. Suppose $\bar{g}(m) = 1$; then the speedup becomes $\frac{1}{(1-f) + f/m}$, which is Amdahl's law. Suppose $\bar{g}(m) = m$; then the speedup becomes $(1-f) + f \cdot m$, which is Gustafson's law.
Often, for simplicity and to match the notation of Amdahl's law and Gustafson's law, the letter $G$ is used to represent the memory-bound function $\bar{g}(m)$, and $n$ (the number of processors) replaces $m$. With this notation:

$\text{Speedup}_{\text{memory-bounded}} = \frac{(1-f) + f \cdot G(n)}{(1-f) + \frac{f \cdot G(n)}{n}}$
Suppose one would like to determine the memory-bounded speedup of matrix multiplication. The memory requirement of matrix multiplication is roughly $x = 3N^2$ (three $N \times N$ matrices), and the computation requirement is $2N^3$. Solving $x = 3N^2$ for $N$ and substituting into $2N^3$, we have:

$g(x) = 2\left(\frac{x}{3}\right)^{3/2} = \frac{2}{3^{3/2}} x^{3/2}$

and, stripping the coefficient,

$\bar{g}(x) = x^{3/2}$
Thus the memory-bounded speedup for matrix multiplication is:

$\frac{(1-f) + f \cdot \bar{g}(m)}{(1-f) + \frac{f \cdot \bar{g}(m)}{m}} = \frac{(1-f) + f \cdot m^{3/2}}{(1-f) + f \cdot m^{1/2}}$
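The derivation above can be sketched in a few lines. The helper below first sanity-checks that g(x) = 2*(x/3)**1.5 recovers the work 2*N**3 from the memory x = 3*N**2, then evaluates the matrix-multiplication speedup for illustrative values of f and m (neither is specified in the article):

```python
def matmul_speedup(f, m):
    # (1-f + f*m^{3/2}) / (1-f + f*m^{1/2}); note g_bar(m)/m = m**0.5
    return ((1 - f) + f * m ** 1.5) / ((1 - f) + f * m ** 0.5)

# sanity check: g(x) = 2*(x/3)**1.5 recovers the work 2*N**3 from x = 3*N**2
N = 100
x = 3 * N ** 2
assert abs(2 * (x / 3) ** 1.5 - 2 * N ** 3) < 1e-6 * (2 * N ** 3)

for m in (4, 16, 64):
    print(m, matmul_speedup(0.9, m))   # speedup grows faster than in Amdahl's model
```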
The following is another matrix example, this time a matrix factorization, which illustrates the rapid increase in parallel execution time.[6] The execution time of factoring an $N \times N$ matrix on a uniprocessor is $O(N^3)$, while its memory requirement is $O(N^2)$.
Suppose a 10,000-by-10,000 matrix takes 800 MB of memory and can be factorized in 1 hour on a uniprocessor. For the scaled workload, suppose it is possible to factorize a 320,000-by-320,000 matrix in 32 hours. The increase in execution time is quite large, but the increase in problem size may be more valuable to someone whose premier goal is accuracy. For example, an astrophysicist may be more interested in simulating an N-body problem with the number of particles as large as possible. This example shows that for computation-intensive applications, memory capacity does not need to scale up proportionally with computing power.
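The arithmetic behind these figures can be reconstructed if one assumes that total memory grows with the processor count. The 1024-node count below is an assumption, chosen because it is the value consistent with both the 32x growth in matrix dimension (sqrt(1024) = 32) and the 32-hour parallel time:

```python
# Assumption: the scaled run uses 1024 nodes, so total memory grows 1024x
# and the matrix dimension grows sqrt(1024) = 32x; O(N^3) factorization work.
nodes = 1024                   # assumed processor count, not stated in the text
N0, hours0 = 10_000, 1         # uniprocessor baseline: 800 MB, 1 hour
scale = int(nodes ** 0.5)      # memory * 1024  =>  dimension * 32
N1 = N0 * scale
work_ratio = (N1 / N0) ** 3    # work grows as N**3: 32**3 = 32768
hours1 = hours0 * work_ratio / nodes
print(N1, hours1)              # 320000 32.0
```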
The memory-bounded speedup model is the first work to reveal that memory is the performance constraint for high-end computing, and it presents a quantitative mathematical formulation for the trade-off between memory and computing. It is based on the memory-bounded function $W = G(M)$, where $W$ is the work and thus, for most applications, also the computation; $M$ is the memory requirement in terms of capacity; and $G$ is the reuse rate. $W = G(M)$ gives a very simple, but effective, description of the relation between computation and memory requirement. From an architecture viewpoint, the memory-bounded model suggests that the size, as well as the speed, of the cache(s) should match the CPU performance. Microprocessors such as the Pentium Pro, Alpha 21164, StrongARM SA-110, and Loongson-3A use 80% or more of their transistors for the on-chip cache rather than for computing components. From an algorithm-design viewpoint, the model suggests reducing the number of memory accesses; that is, reusing data whenever possible. The function $G$ gives the reuse rate. Today, the term memory-bound function has become a general term referring to functions that involve extensive memory access.[7] Memory complexity analysis has become a discipline of computer algorithm analysis.