Bulk synchronous parallel

The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. It is similar to the parallel random access machine (PRAM) model, but unlike PRAM, BSP does not take communication and synchronization for granted. In fact, quantifying the requisite synchronization and communication is an important part of analyzing a BSP algorithm.

History

The BSP model was developed by Leslie Valiant of Harvard University during the 1980s. The definitive article was published in 1990.[1]

Between 1990 and 1992, Leslie Valiant and Bill McColl of Oxford University worked on ideas for a distributed memory BSP programming model, in Princeton and at Harvard. Between 1992 and 1997, McColl led a large research team at Oxford that developed various BSP programming libraries, languages and tools, and also numerous massively parallel BSP algorithms, including many early examples of high-performance communication-avoiding parallel algorithms[2] and recursive "immortal" parallel algorithms that achieve the best possible performance and optimal parametric tradeoffs.[3]

With interest and momentum growing, McColl then led a group from Oxford, Harvard, Florida, Princeton, Bell Labs, Columbia and Utrecht that developed and published the BSPlib Standard for BSP programming in 1996.[4]

Valiant developed an extension to the BSP model in the 2000s, leading to the publication of the Multi-BSP model in 2011.[5]

In 2017, McColl developed a major new extension of the BSP model that provides fault tolerance and tail tolerance for large-scale parallel computations in AI, analytics, and high-performance computing (HPC).[6][7]

The BSP model

Overview

A BSP computer consists of components capable of processing and/or local memory transactions (i.e., processors), a network that routes messages between pairs of such components, and a hardware facility that allows for the synchronization of all or a subset of components.

This is commonly interpreted as a set of processors that may follow different threads of computation, with each processor equipped with fast local memory and interconnected by a communication network.

BSP algorithms rely heavily on the third feature; a computation proceeds in a series of global supersteps, each of which consists of three components: concurrent computation, in which every participating processor performs local computations using only values stored in its local memory; communication, in which the processes exchange data among themselves; and barrier synchronization, at which each process waits until all other processes have reached the same barrier.

The computation and communication actions do not have to be ordered in time. Communication typically takes the form of one-sided PUT and GET remote direct memory access (RDMA) calls rather than paired two-sided send and receive message-passing calls.

The barrier synchronization concludes the superstep: it ensures that all one-sided communications are properly concluded. Systems based on two-sided communication include this synchronization cost implicitly for every message sent. The barrier synchronization method relies on the BSP computer's hardware facility. In Valiant's original paper, this facility periodically checks if the end of the current superstep has been reached globally. The period of this check is denoted by $L$.[1]
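
As a concrete illustration, here is a minimal sketch of one superstep in C, using BSPlib-style primitives (bsp_put for one-sided communication, bsp_sync for the barrier); the BSPlib library is introduced later in this article, and exact signatures vary slightly between implementations, so treat this as illustrative rather than definitive.

```c
#include <bsp.h>    /* BSPlib header; name may differ per implementation */
#include <stdio.h>

int main(void) {
    bsp_begin(bsp_nprocs());          /* start SPMD section on all processors */
    int p   = bsp_nprocs();           /* number of processors */
    int pid = bsp_pid();              /* this processor's id */

    int received = 0;
    bsp_push_reg(&received, sizeof(int));  /* register memory for one-sided access */
    bsp_sync();                            /* registration takes effect at the barrier */

    /* Superstep: local computation ... */
    int value = pid * pid;            /* stand-in for a real local computation */

    /* ... one-sided communication: PUT the value into the right neighbor ... */
    bsp_put((pid + 1) % p, &value, &received, 0, sizeof(int));

    /* ... barrier synchronization: all PUTs are guaranteed delivered after this. */
    bsp_sync();

    printf("Processor %d received %d\n", pid, received);

    bsp_pop_reg(&received);
    bsp_end();
    return 0;
}
```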

The BSP model is also well-suited for automatic memory management for distributed-memory computing through over-decomposition of the problem and oversubscription of the processors. The computation is divided into more logical processes than there are physical processors, and processes are randomly assigned to processors. This strategy can be shown statistically to lead to almost perfect load balancing, both of work and communication.
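
The effect of this strategy can be checked with a simple standalone simulation (plain C, independent of any BSP library; the processor and process counts below are arbitrary assumptions): randomly assigning many logical processes to a few physical processors drives the maximum load toward the average load.

```c
#include <stdio.h>
#include <stdlib.h>

enum { PROCS = 64, TASKS = PROCS * 64 };  /* oversubscription factor 64 (assumed) */

int main(void) {
    int load[PROCS] = {0};

    srand(42);                        /* fixed seed for reproducibility */
    for (int t = 0; t < TASKS; t++)
        load[rand() % PROCS]++;       /* random assignment of process to processor */

    int max = 0;
    for (int i = 0; i < PROCS; i++)
        if (load[i] > max) max = load[i];

    /* With TASKS >> PROCS, max/average tends to 1: near-perfect balance. */
    printf("max load = %d, average load = %d, ratio = %.2f\n",
           max, TASKS / PROCS, (double)max / (TASKS / PROCS));
    return 0;
}
```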

Communication

In many parallel programming systems, communications are considered at the level of individual actions, such as sending and receiving a message or memory-to-memory transfer. This is difficult to work with since there are many simultaneous communication actions in a parallel program, and their interactions are typically complex. In particular, it is difficult to say much about the time any single communication action will take to complete.

The BSP model considers communication actions en masse. This has the effect that an upper bound on the time taken to communicate a set of data can be given. BSP considers all communication actions of a superstep as one unit and assumes all individual messages sent as part of this unit have a fixed size.

The maximum number of incoming or outgoing messages for a superstep is denoted by $h$. The ability of a communication network to deliver data is captured by a parameter $g$, defined such that it takes time $hg$ for a processor to deliver $h$ messages of size 1.

A message of length $m$ obviously takes longer to send than a message of size 1. However, the BSP model does not make a distinction between a message of length $m$ and $m$ messages of length 1; in either case, the cost is $mg$.
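
For example (with illustrative, made-up numbers): on a machine with $g = 4$ time units per word, a superstep in which some processor sends or receives at most $h = 1000$ words has a communication cost of $hg = 4000$ time units, whether those words travel as a single message of length 1000 or as 1000 single-word messages.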

The parameter $g$ depends on factors such as the protocols used to interact within the communication network, buffer management by both the processors and the network, and the routing strategy used in the network, as well as on the BSP runtime system. In practice, $g$ is determined empirically for each parallel computer. Note that $g$ is not the normalized single-word delivery time but the single-word delivery time under continuous traffic conditions.
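
One way to estimate $g$ empirically, in the spirit of the probe utilities distributed with some BSPlib implementations, is to keep every processor communicating continuously, time full supersteps for increasing $h$, and fit a line whose slope approximates $g$ (and whose intercept estimates the barrier cost $l$, introduced below). The sketch below is a hedged illustration: it assumes BSPlib's bsp_put and bsp_time primitives, whose details vary across implementations.

```c
#include <bsp.h>
#include <stdio.h>

#define HMAX 4096

int main(void) {
    bsp_begin(bsp_nprocs());
    int p = bsp_nprocs(), pid = bsp_pid();

    double src[HMAX] = {0};           /* words to send */
    double dst[HMAX] = {0};           /* destination buffer, one-sided writable */
    bsp_push_reg(dst, HMAX * sizeof(double));
    bsp_sync();

    for (int h = 64; h <= HMAX; h *= 2) {
        double t0 = bsp_time();
        /* every processor sends h words to its neighbor: continuous traffic */
        bsp_put((pid + 1) % p, src, dst, 0, h * sizeof(double));
        bsp_sync();
        double t = bsp_time() - t0;   /* measured cost is roughly h*g + l */
        if (pid == 0)
            printf("h = %5d  time = %.6f s  time/h = %.3e s/word\n",
                   h, t, t / h);      /* time/h approaches g for large h */
    }

    bsp_pop_reg(dst);
    bsp_end();
    return 0;
}
```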

Barriers

The one-sided communication of the BSP model requires barrier synchronization. Barriers are potentially costly but avoid the possibility of deadlock or livelock, since barriers cannot create circular data dependencies; tools to detect and deal with such dependencies are therefore unnecessary. Barriers also permit novel forms of fault tolerance.

The cost of barrier synchronization is influenced by two issues: the variation in the completion times of the participating local computations, since all processors must wait for the slowest one; and the cost of reaching a globally consistent state across all processors, which depends on the communication network and on any special-purpose synchronization hardware.

The cost of a barrier synchronization is denoted by $l$. Note that $l < L$ if the synchronization mechanism of the BSP computer is as suggested by Valiant.[1] In practice, the value of $l$ is determined empirically.

On large computers, barriers are expensive, increasingly so at large scale. There is a large body of literature on removing synchronization points from existing algorithms, in the context of BSP computing and beyond. For example, many algorithms allow for the local detection of the global end of a superstep simply by comparing local information to the number of messages already received. This drives the cost of global synchronization, compared to the minimally required latency of communication, to zero.[8] Yet this minimal latency is expected to increase further for future supercomputer architectures and network interconnects; the BSP model, along with other models for parallel computation, will require adaptation to cope with this trend. Multi-BSP is one BSP-based solution.[5]

Algorithmic cost

The cost of a superstep is determined as the sum of three terms: the cost of the longest-running local computation, the cost of the global communication between the processors, and the cost of the barrier synchronization at the end of the superstep.

Thus, the cost of one superstep for $p$ processors is

$$\max_{i=1}^{p}(w_i) + \max_{i=1}^{p}(h_i g) + l$$

where $w_i$ is the cost of the local computation in process $i$, and $h_i$ is the number of messages sent or received by process $i$. Note that homogeneous processors are assumed here. It is more common for the expression to be written as $w + hg + l$, where $w$ and $h$ are maxima. The cost of an entire BSP algorithm is the sum of the costs of its supersteps:

$$W + Hg + Sl = \sum_{s=1}^{S} w_s + g \sum_{s=1}^{S} h_s + Sl$$

where $S$ is the number of supersteps. $W$, $H$, and $S$ are usually modeled as functions that vary with problem size, and these three characteristics of a BSP algorithm are usually described in terms of asymptotic notation, e.g., $H \in O(n/p)$.
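
As a small illustration of this cost model (plain C; the machine parameters and per-superstep maxima below are hypothetical rather than measured), the total cost can be accumulated superstep by superstep:

```c
#include <stdio.h>

/* Cost of one superstep: w and h are the maxima over all processors;
   g and l are the machine parameters described above. */
static double superstep_cost(double w, double h, double g, double l) {
    return w + h * g + l;
}

int main(void) {
    /* Hypothetical machine parameters, in units of flop time (assumptions). */
    const double g = 4.0, l = 1000.0;

    /* Hypothetical algorithm trace: per-superstep maxima (w_s, h_s). */
    const double w[] = {50000.0, 20000.0, 20000.0};
    const double h[] = { 1000.0,  4000.0,   500.0};
    const int S = 3;                  /* number of supersteps */

    double total = 0.0;
    for (int s = 0; s < S; s++)
        total += superstep_cost(w[s], h[s], g, l);

    /* total = sum_s w_s + g * sum_s h_s + S*l, as in the formula above */
    printf("total BSP cost = %.0f time units over %d supersteps\n", total, S);
    return 0;
}
```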

Extensions and uses

Interest in BSP has soared, with Google adopting it as a major technology for graph analytics at massive scale via Pregel and MapReduce. Also, with the next generation of Hadoop decoupling the MapReduce model from the rest of the Hadoop infrastructure, there are now active open-source projects to add explicit BSP programming, as well as other high-performance parallel programming models, on top of Hadoop. Examples are Apache Hama and Apache Giraph.[9]

BSP has been extended by many authors to address concerns about BSP's unsuitability for modeling specific architectures or computational paradigms. One example of this is the decomposable BSP model. The model has also been used in the creation of a number of new programming languages and interfaces, such as Bulk Synchronous Parallel ML (BSML), BSPlib, Apache Hama,[9] and Pregel.[10]

Notable implementations of the BSPlib standard are the Paderborn University BSP library[11] and the Oxford BSP Toolset by Jonathan Hill.[12] Modern implementations include BSPonMPI[13] (which simulates BSP on top of the Message Passing Interface) and MulticoreBSP[14][15] (a novel implementation targeting modern shared-memory architectures). MulticoreBSP for C is especially notable for its capability of starting nested BSP runs, thus allowing for explicit Multi-BSP programming.

Notes and References

  1. Leslie G. Valiant, A bridging model for parallel computation, Communications of the ACM, Volume 33 Issue 8, Aug. 1990 http://portal.acm.org/citation.cfm?id=79173.79181
  2. W F McColl. Scalable Computing. Computer Science Today: Recent Trends and Developments. J van Leeuwen (editor). LNCS Volume 1000, Springer-Verlag, pp. 46-61 (1995). https://link.springer.com/chapter/10.1007/BFb0015236
  3. W F McColl and A Tiskin. Memory-efficient matrix multiplication in the BSP model. Algorithmica 24(3), pp. 287-297 (1999). https://link.springer.com/article/10.1007/PL00008264
  4. J M D Hill, W F McColl, D C Stefanescu, M W Goudreau, K Lang, S B Rao, T Suel, T Tsantilas and R H Bisseling. BSPlib: The BSP Programming Library. Parallel Computing 24 (14) pp. 1947-1980 (1998) https://dl.acm.org/doi/abs/10.1016/S0167-8191%2898%2900093-3
  5. Valiant, L. G. (2011). A bridging model for multi-core computing. Journal of Computer and System Sciences, 77(1), 154-166 https://dx.doi.org/10.1016/j.jcss.2010.06.012
  6. A Bridging Model for High Performance Cloud Computing by Bill McColl, in 18th SIAM Conference on Parallel Processing for Scientific Computing (2018). http://meetings.siam.org/sess/dsp_talk.cfm?p=88973
  7. Bill McColl. Mathematics, Models and Architectures. Chapter 1, pp. 6-53. Mathematics for Future Computing and Communications, edited by Liao Heng and Bill McColl. Cambridge University Press (2022). https://www.cambridge.org/core/books/abs/mathematics-for-future-computing-and-communications/mathematics-models-and-architectures/57F4ABDEC50A67BD0CE4911779933541
  8. Alpert, R., & Philbin, J. (1997). cBSP: Zero-cost synchronization in a modified BSP model. NEC Research Institute, 4 Independence Way, Princeton NJ 08540. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.7784&rep=rep1&type=pdf
  9. http://hama.apache.org/ Apache Hama
  10. http://dl.acm.org/citation.cfm?id=1582723 Pregel
  11. The Paderborn University BSP (PUB) Library - Design, Implementation and Performance. Heinz Nixdorf Institute, Department of Computer Science, University of Paderborn, Germany, technical report.
  12. Jonathan Hill: The Oxford BSP Toolset, 1998.
  13. Wijnand J. Suijlen: BSPonMPI, 2006.
  14. MulticoreBSP for C: a high-performance library for shared-memory parallel programming, by A. N. Yzelman, R. H. Bisseling, D. Roose, and K. Meerbergen. International Journal of Parallel Programming, in press (2013), doi:10.1109/TPDS.2013.31.
  15. An Object-Oriented Bulk Synchronous Parallel Library for Multicore Programming, by A. N. Yzelman and Rob H. Bisseling. Concurrency and Computation: Practice and Experience 24(5), pp. 533-553 (2012), doi:10.1002/cpe.1843.